Karayiannis, N B
2000-01-01
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
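The abstract above does not reproduce the update rule itself; as general background, a minimal Python sketch of the kind of soft (membership-weighted) LVQ/clustering update that such reformulation-function approaches generalize is given below. The fuzzy c-means-style membership, learning rate, and toy data are illustrative assumptions, not the paper's ordered weighted aggregation operators.

```python
import numpy as np

def soft_lvq_step(X, prototypes, lr=0.05, m=2.0):
    """One epoch of a generic soft LVQ / fuzzy clustering update.

    Each prototype moves toward every sample, weighted by a fuzzy membership
    computed from relative squared distances (fuzzifier m). This is a generic
    illustration, not the paper's ordered weighted scheme.
    """
    for x in X:
        d2 = np.sum((prototypes - x) ** 2, axis=1) + 1e-12
        # fuzzy c-means style memberships
        u = (d2[:, None] / d2[None, :]) ** (1.0 / (m - 1.0))
        u = 1.0 / u.sum(axis=1)
        prototypes += lr * u[:, None] * (x - prototypes)
    return prototypes

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
protos = rng.normal(1.5, 1.0, (2, 2))
for _ in range(20):
    protos = soft_lvq_step(X, protos)
print(protos)  # prototypes should settle near the two cluster centers
```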
Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms
Roald, Line Alnaes; Andersson, Goran
2017-08-29
Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. We first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
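As background for the analytical reformulation mentioned above, a single linear constraint under Gaussian uncertainty is typically made tractable by tightening its limit with a probabilistic margin. The sketch below shows only that generic tightening; the sensitivities, covariance, and limit are placeholder values, and the paper's partial linearization of the AC power flow equations is not reproduced.

```python
from scipy.stats import norm
import numpy as np

def tightened_limit(limit, sensitivity, cov, epsilon):
    """Deterministic limit for P(y + a^T w <= limit) >= 1 - epsilon,
    where w ~ N(0, cov) is the forecast error and a = sensitivity is the
    (linearized) response of the constrained quantity to w."""
    a = np.asarray(sensitivity, dtype=float)
    margin = norm.ppf(1.0 - epsilon) * np.sqrt(a @ cov @ a)
    return limit - margin

# toy example: a 100 MW line limit, two uncertain injections
cov = np.diag([4.0, 9.0])     # MW^2 forecast-error covariance (assumed)
a = np.array([0.3, -0.5])     # power transfer sensitivities (assumed)
print(tightened_limit(100.0, a, cov, epsilon=0.05))  # about 97.3 MW usable limit
```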
On Reformulating Planning as Dynamic Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)
2000-01-01
In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.
Reformulating Constraints for Compilability and Efficiency
NASA Technical Reports Server (NTRS)
Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin
1992-01-01
KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability-- Can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) Efficiency-- If the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhou, Xinyang; Liu, Zhiyuan
This paper considers distribution networks with distributed energy resources and discrete-rate loads, and designs an incentive-based algorithm that allows the network operator and the customers to pursue given operational and economic objectives, while concurrently ensuring that voltages are within prescribed limits. Four major challenges include: (1) the non-convexity from discrete decision variables, (2) the non-convexity due to a Stackelberg game structure, (3) unavailable private information from customers, and (4) different update frequencies of the two types of devices. In this paper, we first apply a convex relaxation to the discrete variables, then reformulate the non-convex structure into a convex optimization problem together with a pricing/reward signal design, and propose a distributed stochastic dual algorithm for solving the reformulated problem while restoring feasible power rates for discrete devices. By doing so, we are able to statistically achieve the solution of the reformulated problem without exposure of any private information from customers. Stability of the proposed schemes is analytically established and numerically corroborated.
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on the static scheduling of processors where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two over the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
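For reference, the serial Thomas algorithm whose forward elimination and backward substitution sweeps the pipelined variant overlaps is the standard tridiagonal solver; a minimal sketch (generic, without the paper's scheduling of idle processors) follows.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs.
    The forward elimination and backward substitution sweeps are the two
    passes that the pipelined parallel algorithm overlaps across lines."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against a dense solve
n = 6
a = np.r_[0.0, -np.ones(n - 1)]
b = 2.0 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))  # True
```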
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
NASA Technical Reports Server (NTRS)
Warner, James E.; Zubair, Mohammad; Ranjan, Desh
2017-01-01
This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
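As a rough illustration of the sampling being reformulated, the sketch below runs a random-walk Metropolis chain in which an inexpensive stand-in surrogate replaces the finite element model inside the likelihood. The damage parameterization, surrogate response, and noise level are hypothetical, and none of the three parallel sampling strategies from the paper is reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_strain(damage):
    """Stand-in for a trained surrogate of the FEM strain response
    at a set of gauges, as a function of a scalar damage parameter."""
    gauges = np.linspace(0.0, 1.0, 8)
    return 1.0 + damage * np.exp(-5.0 * (gauges - 0.4) ** 2)

true_damage, sigma = 0.7, 0.02
y_obs = surrogate_strain(true_damage) + rng.normal(0, sigma, 8)

def log_post(d):
    if not 0.0 <= d <= 1.0:            # uniform prior on [0, 1]
        return -np.inf
    r = y_obs - surrogate_strain(d)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# random-walk Metropolis using the cheap surrogate likelihood
d, samples = 0.5, []
lp = log_post(d)
for _ in range(20000):
    prop = d + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        d, lp = prop, lp_prop
    samples.append(d)
post = np.array(samples[5000:])
print(post.mean(), post.std())   # posterior mean should sit near 0.7
```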
NASA Astrophysics Data System (ADS)
Darlow, Luke N.; Akhoury, Sharat S.; Connan, James
2015-02-01
Standard surface fingerprint scanners are vulnerable to counterfeiting attacks and also failure due to skin damage and distortion. Thus a high security and damage resistant means of fingerprint acquisition is needed, providing scope for new approaches and technologies. Optical Coherence Tomography (OCT) is a high resolution imaging technology that can be used to image the human fingertip and allow for the extraction of a subsurface fingerprint. Being robust toward spoofing and damage, the subsurface fingerprint is an attractive solution. However, the nature of the OCT scanning process induces speckle: a correlative and multiplicative noise. Six speckle reducing filters for the digital enhancement of OCT fingertip scans have been evaluated. The optimized Bayesian non-local means algorithm improved the structural similarity between processed and reference images by 34%, increased the signal-to-noise ratio, and yielded the most promising visual results. An adaptive wavelet approach, originally designed for ultrasound imaging, and a speckle reducing anisotropic diffusion approach also yielded promising results. A reformulation of these in future work, with an OCT-speckle specific model, may improve their performance.
Maurer, S A; Kussmann, J; Ochsenfeld, C
2014-08-07
We present a low-prefactor, cubically scaling scaled-opposite-spin second-order Møller-Plesset perturbation theory (SOS-MP2) method which is highly suitable for massively parallel architectures like graphics processing units (GPU). The scaling is reduced from O(N⁵) to O(N³) by a reformulation of the MP2-expression in the atomic orbital basis via Laplace transformation and the resolution-of-the-identity (RI) approximation of the integrals in combination with efficient sparse algebra for the 3-center integral transformation. In contrast to previous works that employ GPUs for post Hartree-Fock calculations, we do not simply employ GPU-based linear algebra libraries to accelerate the conventional algorithm. Instead, our reformulation allows us to replace the rate-determining contraction step with a modified J-engine algorithm, which has been proven to be highly efficient on GPUs. Thus, our SOS-MP2 scheme enables us to treat large molecular systems in an accurate and efficient manner on a single GPU-server.
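The Laplace transformation referred to here is, in its standard form, the identity that removes the orbital-energy denominator from the MP2 expression so that it can be recast in the atomic orbital basis; schematically (with a numerical quadrature whose points and weights are fitted):

\[
\frac{1}{\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j}
= \int_0^\infty e^{-(\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j)\,t}\, dt
\approx \sum_{\alpha=1}^{n_\tau} w_\alpha\,
e^{-(\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j)\,t_\alpha},
\]

where i, j (a, b) label occupied (virtual) orbitals; at each quadrature point the energy contribution can then be assembled from atomic-orbital quantities with sparse algebra.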
Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria
ERIC Educational Resources Information Center
Titze, Ingo R.; Hunter, Eric J.
2015-01-01
Purpose: School-teachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method: Vibration dosimetry is reformulated with the inclusion of collision stress.…
Operational Planning for Multiple Heterogeneous Unmanned Aerial Vehicles in Three Dimensions
2009-06-01
human input in the planning process. Two solution methods are presented: (1) a mixed-integer program, and (2) an algorithm that utilizes a metaheuristic to generate composite variables for a linear program, called the Composite Operations Planning... that represent a path and an associated type of UAV. The reformulation is incorporated into an algorithm that uses a metaheuristic to generate the...
Fast Transformation of Temporal Plans for Efficient Execution
NASA Technical Reports Server (NTRS)
Tsamardinos, Ioannis; Muscettola, Nicola; Morris, Paul
1998-01-01
Temporal plans permit significant flexibility in specifying the occurrence time of events. Plan execution can make good use of that flexibility. However, the advantage of execution flexibility is counterbalanced by the cost during execution of propagating the time of occurrence of events throughout the flexible plan. To minimize execution latency, this propagation needs to be very efficient. Previous work showed that every temporal plan can be reformulated as a dispatchable plan, i.e., one for which propagation to immediate neighbors is sufficient. A simple algorithm was given that finds a dispatchable plan with a minimum number of edges in cubic time and quadratic space. In this paper, we focus on the efficiency of the reformulation process, and improve on that result. A new algorithm is presented that uses linear space and has time complexity equivalent to Johnson's algorithm for all-pairs shortest-path problems. Experimental evidence confirms the practical effectiveness of the new algorithm. For example, on a large commercial application, the performance is improved by at least two orders of magnitude. We further show that the dispatchable plan, already minimal in the total number of edges, can also be made minimal in the maximum number of edges incoming or outgoing at any node.
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
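The reformulation in terms of independent standardized Poisson processes is, in its usual form, the random time change representation of a reaction network (notation assumed here, not copied from the paper):

\[
X(t) = X(0) + \sum_{j=1}^{M} \nu_j \, Y_j\!\left(\int_0^t a_j\big(X(s)\big)\, ds\right),
\]

where each Y_j is an independent unit-rate Poisson process, a_j is the propensity of reaction channel j, and ν_j its stoichiometric change vector. Conditioning on the realizations of the Y_j separates the intrinsic randomness channel by channel, which is what makes the Sobol-Hoeffding variance decomposition described in the abstract applicable.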
Rao, Akshay; Elara, Mohan Rajesh; Elangovan, Karthikeyan
This paper aims to develop a local path planning algorithm for a bio-inspired, reconfigurable crawling robot. A detailed description of the robotic platform is first provided, and the suitability for deployment of each of the current state-of-the-art local path planners is analyzed after an extensive literature review. The Enhanced Vector Polar Histogram algorithm is described and reformulated to better fit the requirements of the platform. The algorithm is deployed on the robotic platform in crawling configuration and favorably compared with other state-of-the-art local path planning algorithms.
Quantum algorithm for support matrix machines
NASA Astrophysics Data System (ADS)
Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan
2017-09-01
We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and p×q is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.
Reformulations of practice: beyond experience in paramedic airway management.
Mausz, Justin; Donovan, Seanan; McConnell, Meghan; Lapalme, Corey; Webb, Andrea; Feres, Elizabeth; Tavares, Walter
2017-07-01
"Deliberate practice" and "feedback" are necessary for the development of expertise. We explored clinical performance in settings where these features are inconsistent or limited, hypothesizing that even in algorithmic domains of practice, clinical performance reformulates in ways that may threaten patient safety, and that experience fails to predict performance. Paramedics participated in two recorded simulation sessions involving airway management, which were analyzed three ways: first, we identified variations in "decision paths" by coding the actions of the participants according to an airway management algorithm. Second, we identified cognitive schemas driving behavior using qualitative descriptive analysis. Third, clinical performances were evaluated using a global rating scale, checklist, and time to achieve ventilation; the relationship between experience and these metrics was assessed using Pearson's correlation. Thirty participants completed a total of 59 simulations. Mean experience was 7.2 (SD=5.8) years. We observed highly variable practice patterns and identified idiosyncratic decision paths and schemas governing practice. We revealed problematic performance deficiencies related to situation awareness, decision making, and procedural skills. There was no association between experience and clinical performance (Scenario 1: r=0.13, p=0.47; Scenario 2: r=-0.10, p=0.58), or the number of errors (Scenario 1: r=.10, p=0.57; Scenario 2: r=0.25, p=0.17) or the time to achieve ventilation (Scenario 1: r=0.53, p=0.78; Scenario 2: r=0.27, p=0.15). Clinical performance was highly variable when approaching an algorithmic problem, and procedural and cognitive errors were not attenuated by provider experience. These findings suggest reformulations of practice emerge in settings where feedback and deliberate practice are limited.
NASA Astrophysics Data System (ADS)
Ochsenfeld, Christian; Head-Gordon, Martin
1997-05-01
To exploit the exponential decay found in numerical studies for the density matrix and its derivative with respect to nuclear displacements, we reformulate the coupled perturbed self-consistent field (CPSCF) equations and a quadratically convergent SCF (QCSCF) method for Hartree-Fock and density functional theory within a local density matrix-based scheme. Our D-CPSCF (density matrix-based CPSCF) and D-QCSCF schemes open the way for exploiting sparsity and achieving asymptotically linear scaling of computational complexity with molecular size (M), in the case of D-CPSCF for all O(M) derivative densities. Furthermore, even for small molecules these methods are strongly competitive with conventional algorithms.
Institute for Defense Analysis. Annual Report 1995.
1995-01-01
staff have been involved in the community-wide development of MPI as well as in its application to specific NSA problems. Parallel Groebner Basis Code — Symbolic Computing on Parallel Machines: The Groebner basis method is a set of algorithms for reformulating very complex algebraic expressions...
Parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Amin-Javaheri, Masoud; Orin, David E.
1989-01-01
The development of an O(log2 N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2 N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2 N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.
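Recursive doubling turns a first-order linear recurrence x_i = a_i x_{i-1} + b_i into O(log2 N) rounds of pairwise composition of affine maps, with all compositions in a round independent. The Python sketch below simulates the rounds serially on a toy recurrence; it illustrates the generic technique, not the specific inertia-matrix recurrences of the paper.

```python
import numpy as np

def recursive_doubling(a, b, x0):
    """Solve x[i] = a[i]*x[i-1] + b[i] via recursive doubling.

    Each term is an affine map m_i(x) = a_i*x + b_i; after round k, position i
    holds the composition of the last 2**k maps ending at i. After ceil(log2 n)
    rounds, position i holds the full prefix map, which is applied to x0.
    """
    A = np.array(a, dtype=float)
    B = np.array(b, dtype=float)
    n = len(A)
    step = 1
    while step < n:
        newA, newB = A.copy(), B.copy()
        for i in range(step, n):          # all i within a round are independent
            newA[i] = A[i] * A[i - step]
            newB[i] = A[i] * B[i - step] + B[i]
        A, B, step = newA, newB, step * 2
    return A * x0 + B

a = [0.5, 1.2, -0.3, 0.8, 1.1]
b = [1.0, -2.0, 0.5, 0.0, 3.0]
x, serial = 2.0, []
for ai, bi in zip(a, b):
    x = ai * x + bi
    serial.append(x)
print(np.allclose(recursive_doubling(a, b, 2.0), serial))  # True
```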
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
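For readers unfamiliar with DCA, the generic iteration on a decomposition f = g - h (both convex) linearizes h at the current iterate and minimizes the resulting convex upper bound. The sketch below applies it to a one-dimensional toy problem with a closed-form convex subproblem; it shows the scheme only, not the paper's block clustering DC program or its exact penalty.

```python
import numpy as np

def dca(x0, grad_h, argmin_g_minus_lin, tol=1e-10, max_iter=100):
    """Generic DCA for f = g - h with g, h convex:
    y_k in the subdifferential of h at x_k, then x_{k+1} = argmin_x g(x) - y_k*x."""
    x = x0
    for _ in range(max_iter):
        y = grad_h(x)                      # linearize the concave part -h
        x_new = argmin_g_minus_lin(y)      # solve the convex subproblem
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy DC program: f(x) = x**2 - |x|, i.e. g(x) = x**2, h(x) = |x|
grad_h = lambda x: np.sign(x) if x != 0 else 1.0   # a subgradient of |x|
argmin_sub = lambda y: y / 2.0                     # argmin_x x**2 - y*x
print(dca(0.8, grad_h, argmin_sub))   # converges to 0.5, a global minimizer here
```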
A Fresh Math Perspective Opens New Possibilities for Computational Chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vu, Linda; Govind, Niranjan; Yang, Chao
2017-05-26
By reformulating the TDDFT problem as a matrix function approximation, making use of a special transformation and taking advantage of the underlying symmetry with respect to a non-Euclidean metric, Yang and his colleagues were able to apply the Lanczos algorithm and a Kernel Polynomial Method (KPM) to approximate the absorption spectrum of several molecules. Both of these algorithms require relatively little memory compared to nonsymmetric alternatives, which is the key to the computational savings.
NASA Astrophysics Data System (ADS)
Pennington, Robert S.; Van den Broek, Wouter; Koch, Christoph T.
2014-05-01
We have reconstructed third-dimension specimen information from convergent-beam electron diffraction (CBED) patterns simulated using the stacked-Bloch-wave method. By reformulating the stacked-Bloch-wave formalism as an artificial neural network and optimizing with resilient back propagation, we demonstrate specimen orientation reconstructions with depth resolutions down to 5 nm. To show our algorithm's ability to analyze realistic data, we also discuss and demonstrate our algorithm reconstructing from noisy data and using a limited number of CBED disks. Applicability of this reconstruction algorithm to other specimen parameters is discussed.
Reduced order feedback control equations for linear time and frequency domain analysis
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1981-01-01
An algorithm was developed which can be used to obtain the equations. In a more general context, the algorithm computes a real nonsingular similarity transformation matrix which reduces a real nonsymmetric matrix to block diagonal form, each block of which is a real quasi upper triangular matrix. The algorithm works with both defective and derogatory matrices and when and if it fails, the resultant output can be used as a guide for the reformulation of the mathematical equations that lead up to the ill conditioned matrix which could not be block diagonalized.
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
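As background for the refinement discussed, a standard big bang-big crunch iteration alternates a "crunch" to a fitness-weighted center of mass with a "bang" that rescatters candidates around it with a shrinking radius. The continuous-variable sketch below is a minimal illustration under assumed weights and schedule; the article's discrete member sizing and AISC-ASD constraint handling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def bb_bc(objective, lower, upper, pop=40, iters=60, alpha=1.0):
    """Minimal big bang-big crunch loop for a box-constrained minimization."""
    dim = len(lower)
    x = rng.uniform(lower, upper, (pop, dim))        # initial big bang
    for k in range(1, iters + 1):
        f = np.array([objective(xi) for xi in x])
        w = 1.0 / (f - f.min() + 1e-12)              # crunch: fitness weights
        center = (w[:, None] * x).sum(axis=0) / w.sum()
        radius = alpha * (upper - lower) / k         # shrinking search radius
        x = center + rng.normal(0.0, 1.0, (pop, dim)) * radius   # new big bang
        x = np.clip(x, lower, upper)
        x[0] = center                                # keep the current center
    return center

sphere = lambda v: float(np.sum(v ** 2))
print(bb_bc(sphere, np.array([-5.0, -5.0]), np.array([5.0, 5.0])))  # near [0, 0]
```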
Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II
2016-09-01
of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for... plays a central role in many applications, including image processing, computer vision and statistics, etc. [13, 17, 20, 24]. The EMD is a metric defined...
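The L1 reformulation alluded to in this snippet is, in its usual form, the Beckmann (flux) formulation of the earth mover's distance between densities ρ⁰ and ρ¹ (stated here as general background, with notation assumed):

\[
\mathrm{EMD}(\rho^{0},\rho^{1}) \;=\; \min_{m}\ \int_{\Omega} \lvert m(x)\rvert \, dx
\quad \text{subject to} \quad \nabla\cdot m(x) + \rho^{1}(x) - \rho^{0}(x) = 0,
\]

with a zero-flux boundary condition. The objective is a homogeneous degree 1 (L1-type) norm of the flux m, which is what allows L1 minimization and regularization techniques to be applied.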
Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I
2016-09-01
which EMD can be reformulated as a familiar homogeneous degree 1 regularized minimization. The new minimization problem is very similar to problems which... which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing, computer vision...
Battery Control Algorithms | Transportation Research | NREL
Publications include: Accounting for Lithium-Ion Battery Degradation in Electric Vehicle Charging Optimization; Advanced Reformulation of Lithium-Ion Battery Models for Enabling Electric Transportation; Fail-Safe Design for Large Capacity Lithium-Ion Battery Systems.
NASA Technical Reports Server (NTRS)
Reed, Kenneth W.
1992-01-01
A new hybrid stress finite element algorithm suitable for analyses of large quasistatic deformation of inelastic solids is presented. Principal variables in the formulation are the nominal stress rate and spin. The finite element equations which result are discrete versions of the equations of compatibility and angular momentum balance. Consistent reformulation of the constitutive equation and accurate and stable time integration of the stress are discussed at length. Examples which bring out the feasibility and performance of the algorithm conclude the work.
NASA Astrophysics Data System (ADS)
Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji
Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve superior performance to Espresso and previous graph-based WSD methods, even though the proposed algorithms have fewer parameters and are easy to calibrate.
NASA Astrophysics Data System (ADS)
Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry
2018-06-01
This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a Lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modeling is described using a fractional order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm to ensure the algorithm's convergence to the physical parameters. An initialization method is proposed in this paper by taking into account previously acquired information about the static and dynamic system behavior. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using the Monte-Carlo method.
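As a rough illustration of the Levenberg-Marquardt identification step, the sketch below fits an ohmic resistance plus one RC branch to a noisy current-step voltage response with SciPy's LM solver. The model structure, parameter values, and initial guess are assumptions for illustration; the paper's fractional-order diffusion element is not modeled.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 50.0, 200)          # s
I = 2.0                                   # A, constant discharge current (assumed)

def v_drop(params, t):
    """Voltage drop of a series resistance plus one RC branch under a current step."""
    r0, r1, c1 = params
    return I * (r0 + r1 * (1.0 - np.exp(-t / (r1 * c1))))

rng = np.random.default_rng(2)
true = np.array([0.05, 0.03, 800.0])                 # ohm, ohm, farad (assumed)
y = v_drop(true, t) + rng.normal(0.0, 5e-4, t.size)  # noisy "measurement"

# Levenberg-Marquardt fit; the initial guess plays the role of the
# initialization the paper derives from the static and dynamic behavior.
res = least_squares(lambda p: v_drop(p, t) - y, x0=[0.03, 0.05, 500.0], method="lm")
print(res.x)   # should recover values close to `true`
```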
Metaheuristic Optimization and its Applications in Earth Sciences
NASA Astrophysics Data System (ADS)
Yang, Xin-She
2010-05-01
A common but challenging task in modelling geophysical and geological processes is to handle massive data and to minimize certain objectives. This can essentially be considered as an optimization problem, and thus many new efficient metaheuristic optimization algorithms can be used. In this paper, we will introduce some modern metaheuristic optimization algorithms such as genetic algorithms, harmony search, firefly algorithm, particle swarm optimization and simulated annealing. We will also discuss how these algorithms can be applied to various applications in earth sciences, including nonlinear least-squares, support vector machine, Kriging, inverse finite element analysis, and data-mining. We will present a few examples to show how different problems can be reformulated as optimization. Finally, we will make some recommendations for choosing various algorithms to suit various problems.
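Of the metaheuristics listed, simulated annealing is the simplest to sketch; the toy minimization below (objective, schedule, and step size are illustrative choices) shows the Metropolis accept/reject rule that the other algorithms elaborate in different ways.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulated_annealing(f, x0, T0=1.0, cooling=0.995, steps=5000, step_size=0.5):
    """Minimize f by random perturbations accepted with the Metropolis rule."""
    x, fx, T = np.asarray(x0, float), f(x0), T0
    best, fbest = x.copy(), fx
    for _ in range(steps):
        cand = x + rng.normal(0.0, step_size, x.shape)
        fc = f(cand)
        if fc < fx or rng.uniform() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling                      # geometric cooling schedule
    return best, fbest

# toy nonlinear objective with several local minima (Himmelblau's function)
f = lambda v: float((v[0] ** 2 + v[1] - 11) ** 2 + (v[0] + v[1] ** 2 - 7) ** 2)
print(simulated_annealing(f, [0.0, 0.0]))   # lands near one of Himmelblau's minima
```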
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with folded concave penalty. Although their global solutions entail desirable statistical properties, there is a lack of optimization techniques that guarantee global optimality in a general setting. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence enables us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as the mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate a significant outperformance of MIPGO over the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
Optimization-based additive decomposition of weakly coercive problems with applications
Bochev, Pavel B.; Ridzal, Denis
2016-01-27
In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.
NASA Astrophysics Data System (ADS)
Levin, Alan R.; Zhang, Deyin; Polizzi, Eric
2012-11-01
In a recent article Polizzi (2009) [15], the FEAST algorithm has been presented as a general purpose eigenvalue solver which is ideally suited for addressing the numerical challenges in electronic structure calculations. Here, FEAST is presented beyond the “black-box” solver as a fundamental modeling framework which can naturally address the original numerical complexity of the electronic structure problem as formulated by Slater in 1937 [3]. The non-linear eigenvalue problem arising from the muffin-tin decomposition of the real-space domain is first derived and then reformulated to be solved exactly within the FEAST framework. This new framework is presented as a fundamental and practical solution for performing both accurate and scalable electronic structure calculations, bypassing the various issues of using traditional approaches such as linearization and pseudopotential techniques. A finite element implementation of this FEAST framework along with simulation results for various molecular systems is also presented and discussed.
Arbitrary-step randomly delayed robust filter with application to boost phase tracking
NASA Astrophysics Data System (ADS)
Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang
2018-04-01
The conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real-time and that the measurement noise is Gaussian white noise. But in practice, both assumptions are invalid. To solve this problem, a novel algorithm is proposed by taking the following four steps. At first, the measurement model is modified by Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, which removes the restriction that the maximum number of delays must be one or two and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, the arbitrary-step randomly delayed high-degree cubature Kalman filter is modified to the arbitrary-step randomly delayed high-degree cubature Huber-based filter based on the Huber technique, which is essentially an M-estimator. Therefore, the proposed filter is not only robust to the randomly delayed measurements, but also robust to glint noise. The application to a boost phase tracking example demonstrates the superiority of the proposed algorithms.
Potential for improvement of population diet through reformulation of commonly eaten foods.
van Raaij, Joop; Hendriksen, Marieke; Verhagen, Hans
2009-03-01
FOOD REFORMULATION: Reformulation of foods is considered one of the key options to achieve population nutrient goals. The compositions of many foods are modified to assist the consumer bring his or her daily diet more in line with dietary recommendations. INITIATIVES ON FOOD REFORMULATION: Over the past few years the number of reformulated foods introduced on the European market has increased enormously and it is expected that this trend will continue for the coming years. LIMITS TO FOOD REFORMULATION: Limitations to food reformulation in terms of choice of foods appropriate for reformulation and level of feasible reformulation relate mainly to consumer acceptance, safety aspects, technological challenges and food legislation. IMPACT ON KEY NUTRIENT INTAKE AND HEALTH: The potential impact of reformulated foods on key nutrient intake and health is obvious. Evaluation of the actual impact requires not only regular food consumption surveys, but also regular updates of the food composition table including the compositions of newly launched reformulated foods.
Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.; ...
2016-02-10
We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints to bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility and optimality based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developed techniques that enumerate basic and thermally coupled distillation configurations, to provide, for the first time, a global optimization based rank-list of distillation configurations.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
Brabec, Jiri; Lin, Lin; Shao, Meiyue; ...
2015-10-06
We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm only requires half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.
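The symmetric Lanczos iteration referred to above builds, from a symmetric operator and a starting vector, a small tridiagonal matrix whose eigenvalues approximate the part of the spectrum probed by that vector. A generic dense-matrix sketch with the ordinary Euclidean inner product follows; the TDDFT product operator and the non-Euclidean inner product from the paper are not reproduced.

```python
import numpy as np

def lanczos(A, v0, k):
    """k-step symmetric Lanczos: returns the tridiagonal coefficients (alpha, beta).
    Eigenvalues of the k x k tridiagonal matrix approximate extremal eigenvalues of A."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:               # breakdown: invariant subspace found
            break
        V[:, j + 1] = w / beta[j]
    return alpha, beta[:k - 1]

rng = np.random.default_rng(4)
M = rng.normal(size=(200, 200))
A = M + M.T                               # a symmetric test matrix
alpha, beta = lanczos(A, rng.normal(size=200), 40)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)
print(ritz.max(), np.linalg.eigvalsh(A).max())   # the largest Ritz value is close
```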
A survey of the reformulation of Australian child-oriented food products.
Savio, Stephanie; Mehta, Kaye; Udell, Tuesday; Coveney, John
2013-09-11
Childhood obesity is one of the most pressing public health challenges of the 21st century. Reformulating commonly eaten food products is a key emerging strategy to improve the food supply and help address rising rates of obesity and chronic disease. This study aimed to monitor reformulation of Australian child-oriented food products (products marketed specifically to children) from 2009-2011. In 2009, all child-oriented food products in a large supermarket in metropolitan Adelaide were identified. These baseline products were followed up in 2011 to identify products still available for sale. Nutrient content data were collected from Nutrient Information Panels in 2009 and 2011. Absolute and percentage change in nutrient content were calculated for energy, total fat, saturated fat, sugars, sodium and fibre. Data were descriptively analysed to examine reformulation in individual products, in key nutrients, within product categories and across all products. Two methods were used to assess the extent of reformulation; the first involved assessing percentage change in single nutrients over time, while the second involved a set of nutrient criteria to assess changes in overall healthiness of products over time. Of 120 products, 40 remained unchanged in nutrient composition from 2009-2011 and 80 underwent change. The proportions of positively and negatively reformulated products were similar for most nutrients surveyed, with the exception of sodium. Eighteen products (15%) were simultaneously positively and negatively reformulated for different nutrients. Using percentage change in nutrient content to assess extent of reformulation, nearly half (n = 53) of all products were at least moderately reformulated and just over one third (n = 42) were substantially reformulated. The nutrient criteria method revealed 5 products (6%) that were positively reformulated and none that had undergone negative reformulation. Positive and negative reformulation was observed to a similar extent within the sample indicating little overall improvement in healthiness of the child-oriented food supply from 2009-2011. In the absence of agreed reformulation standards, the extent of reformulation was assessed against criteria developed specifically for this project. While arbitrary in nature, these criteria were based on reasonable assessment of the meaningfulness of reformulation and change in nutrient composition. As well as highlighting nutrient composition changes in a number of food products directed to children, this study emphasises the need to develop comprehensive, targeted and standardised reformulation benchmarks to assess the extent of reformulation occurring in the food supply.
The Mine Locomotive Wireless Network Strategy Based on Successive Interference Cancellation
Wu, Liaoyuan; Han, Jianghong; Wei, Xing; Shi, Lei; Ding, Xu
2015-01-01
We consider a wireless network strategy based on successive interference cancellation (SIC) for mine locomotives. We first build the original mathematical model for the strategy, which is a non-convex model. Then, we examine this model intensively and find that there are certain regularities embedded in it. Based on these findings, we are able to reformulate the model into a new form and design a simple algorithm which can assign each locomotive a proper transmitting scheme during the whole schedule procedure. Simulation results show that the outcomes obtained through this algorithm are improved by around 50% compared with those that do not apply the SIC technique. PMID:26569240
NASA Astrophysics Data System (ADS)
Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro
2017-05-01
In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.
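A minimal iterative soft-thresholding sketch for a generic sparse linear inverse problem min_f ½‖Af − d‖² + λ‖f‖₁ is given below; it illustrates the class of algorithm named in the abstract with a random toy operator, not the specific forward operator or thresholding rule used for the time-fractional source problem.

```python
import numpy as np

def ista(A, d, lam=0.1, step=None, iters=1000):
    """Iterative soft-thresholding for min_f 0.5*||A f - d||^2 + lam*||f||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    f = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ f - d)                    # gradient of the smooth part
        z = f - step * g
        f = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return f

rng = np.random.default_rng(5)
A = rng.normal(size=(60, 100))
f_true = np.zeros(100)
f_true[[5, 42, 77]] = [2.0, -1.5, 1.0]
d = A @ f_true + rng.normal(0, 0.01, 60)
# indices of the recovered large coefficients (should include 5, 42, 77)
print(np.flatnonzero(np.abs(ista(A, d, lam=0.2)) > 0.5))
```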
Loeffler, Troy D; Sepehri, Aliasghar; Chen, Bin
2015-09-08
Reformulation of existing Monte Carlo algorithms used in the study of grand canonical systems has yielded massive improvements in efficiency. Here we present an energy biasing scheme designed to address targeting issues encountered in particle swap moves using sophisticated algorithms such as the Aggregation-Volume-Bias and Unbonding-Bonding methods. Specifically, this energy biasing scheme allows a particle to be inserted to (or removed from) a region that is more acceptable. As a result, this new method showed a several-fold increase in insertion/removal efficiency in addition to an accelerated rate of convergence for the thermodynamic properties of the system.
A set partitioning reformulation for the multiple-choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Voß, Stefan; Lalla-Ruiz, Eduardo
2016-05-01
The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions for solving it or being partially included in algorithmic schemes is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
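For reference, the MMKP being reformulated can be written, in commonly used notation (assumed here), as choosing exactly one object j from each class C_i subject to m knapsack constraints:

\[
\max \sum_{i=1}^{n}\sum_{j\in C_i} p_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j\in C_i} x_{ij} = 1 \;\; (i=1,\dots,n), \qquad
\sum_{i=1}^{n}\sum_{j\in C_i} w_{ij}^{k}\, x_{ij} \le c^{k} \;\; (k=1,\dots,m), \qquad
x_{ij}\in\{0,1\}.
\]

A set partitioning model typically re-indexes the decision over candidate selections, with partitioning constraints requiring each class to be covered exactly once; the specific construction proposed in the article is not reproduced here.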
Wrinkle-free design of thin membrane structures using stress-based topology optimization
NASA Astrophysics Data System (ADS)
Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan
2017-05-01
Thin membrane structures would experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and a linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
Reducing calorie sales from supermarkets - 'silent' reformulation of retailer-brand food products.
Jensen, Jørgen Dejgård; Sommer, Iben
2017-08-23
Food product reformulation is seen as one among several tools to promote healthier eating. Reformulating the recipe for a processed food, e.g. reducing the fat, sugar or salt content of the foods, or increasing the content of whole-grains, can help the consumers to pursue a healthier life style. In this study, we evaluate the effects on calorie sales of a 'silent' reformulation strategy, where a retail chain's private-label brands are reformulated to a lower energy density without making specific claims on the product. Using an ecological study design, we analyse 52 weeks' sales data - enriched with data on products' energy density - from a Danish retail chain. Sales of eight product categories were studied. Within each of these categories, specific products had been reformulated during the 52 weeks data period. Using econometric methods, we decompose the changes in calorie turnover and sales value into direct and indirect effects of product reformulation. For all considered products, the direct effect of product reformulation was a reduction in the sale of calories from the respective product categories - between 0.5 and 8.2%. In several cases, the reformulation led to indirect substitution effects that were counterproductive with regard to reducing calorie turnover. However, except in two insignificant cases, these indirect substitution effects were dominated by the direct effect of the reformulation, leading to net reductions in calorie sales between -3.1 and 7.5%. For all considered product reformulations, the reformulation had either positive, zero or very moderate negative effects on the sales value of the product category to which the reformulated product belonged. Based on these findings, 'silent' reformulation of retailer's private brands towards lower energy density seems to contribute to lowering the calorie intake in the population (although to a moderate extent) with moderate losses in retailer's sales revenues.
McNaughton, Emily C; Coplan, Paul M; Black, Ryan A; Weber, Sarah E; Chilcoat, Howard D; Butler, Stephen F
2014-01-01
Background Reformulating opioid analgesics to deter abuse is one approach toward improving their benefit-risk balance. To assess sentiment and attempts to defeat these products among difficult-to-reach populations of prescription drug abusers, evaluation of posts on Internet forums regarding reformulated products may be useful. A reformulated version of OxyContin (extended-release oxycodone) with physicochemical properties to deter abuse presented an opportunity to evaluate posts about the reformulation in online discussions. Objective The objective of this study was to use messages on Internet forums to evaluate reactions to the introduction of reformulated OxyContin and to identify methods aimed to defeat the abuse-deterrent properties of the product. Methods Posts collected from 7 forums between January 1, 2008 and September 30, 2013 were evaluated before and after the introduction of reformulated OxyContin on August 9, 2010. A quantitative evaluation of discussion levels across the study period and a qualitative coding of post content for OxyContin and 2 comparators for the 26 month period before and after OxyContin reformulation were conducted. Product endorsement was estimated for each product before and after reformulation as the ratio of endorsing-to-discouraging posts (ERo). Post-to-preintroduction period changes in ERos (ie, ratio of ERos) for each product were also calculated. Additionally, post content related to recipes for defeating reformulated OxyContin were evaluated from August 9, 2010 through September 2013. Results Over the study period, 45,936 posts related to OxyContin, 18,685 to Vicodin (hydrocodone), and 23,863 to Dilaudid (hydromorphone) were identified. The proportion of OxyContin-related posts fluctuated between 6.35 and 8.25 posts per 1000 posts before the reformulation, increased to 10.76 in Q3 2010 when reformulated OxyContin was introduced, and decreased from 9.14 in Q4 2010 to 3.46 in Q3 2013 in the period following the reformulation. The sentiment profile for OxyContin changed following reformulation; the post-to-preintroduction change in the ERo indicated reformulated OxyContin was discouraged significantly more than the original formulation (ratio of ERos=0.43, P<.001). A total of 37 recipes for circumventing the abuse-deterrent characteristics of reformulated OxyContin were observed; 32 were deemed feasible (ie, able to abuse). The frequency of posts reporting abuse of reformulated OxyContin via these recipes was low and decreased over time. Among the 5677 posts mentioning reformulated OxyContin, 825 posts discussed recipes and 498 reported abuse of reformulated OxyContin by such recipes (41 reported injecting and 128 reported snorting). Conclusions After introduction of physicochemical properties to deter abuse, changes in discussion of OxyContin on forums occurred, reflected by a reduction in discussion levels and endorsing content. Despite discussion of recipes, there is a relatively small proportion of reported abuse of reformulated OxyContin via recipes, particularly by injecting or snorting routes. Analysis of Internet discussion is a valuable tool for monitoring the impact of abuse-deterrent formulations. PMID:24800858
McNaughton, Emily C; Coplan, Paul M; Black, Ryan A; Weber, Sarah E; Chilcoat, Howard D; Butler, Stephen F
2014-05-02
Reformulating opioid analgesics to deter abuse is one approach toward improving their benefit-risk balance. To assess sentiment and attempts to defeat these products among difficult-to-reach populations of prescription drug abusers, evaluation of posts on Internet forums regarding reformulated products may be useful. A reformulated version of OxyContin (extended-release oxycodone) with physicochemical properties to deter abuse presented an opportunity to evaluate posts about the reformulation in online discussions. The objective of this study was to use messages on Internet forums to evaluate reactions to the introduction of reformulated OxyContin and to identify methods aimed to defeat the abuse-deterrent properties of the product. Posts collected from 7 forums between January 1, 2008 and September 30, 2013 were evaluated before and after the introduction of reformulated OxyContin on August 9, 2010. A quantitative evaluation of discussion levels across the study period and a qualitative coding of post content for OxyContin and 2 comparators for the 26 month period before and after OxyContin reformulation were conducted. Product endorsement was estimated for each product before and after reformulation as the ratio of endorsing-to-discouraging posts (ERo). Post-to-preintroduction period changes in ERos (ie, ratio of ERos) for each product were also calculated. Additionally, post content related to recipes for defeating reformulated OxyContin were evaluated from August 9, 2010 through September 2013. Over the study period, 45,936 posts related to OxyContin, 18,685 to Vicodin (hydrocodone), and 23,863 to Dilaudid (hydromorphone) were identified. The proportion of OxyContin-related posts fluctuated between 6.35 and 8.25 posts per 1000 posts before the reformulation, increased to 10.76 in Q3 2010 when reformulated OxyContin was introduced, and decreased from 9.14 in Q4 2010 to 3.46 in Q3 2013 in the period following the reformulation. The sentiment profile for OxyContin changed following reformulation; the post-to-preintroduction change in the ERo indicated reformulated OxyContin was discouraged significantly more than the original formulation (ratio of ERos=0.43, P<.001). A total of 37 recipes for circumventing the abuse-deterrent characteristics of reformulated OxyContin were observed; 32 were deemed feasible (ie, able to abuse). The frequency of posts reporting abuse of reformulated OxyContin via these recipes was low and decreased over time. Among the 5677 posts mentioning reformulated OxyContin, 825 posts discussed recipes and 498 reported abuse of reformulated OxyContin by such recipes (41 reported injecting and 128 reported snorting). After introduction of physicochemical properties to deter abuse, changes in discussion of OxyContin on forums occurred, reflected by a reduction in discussion levels and endorsing content. Despite discussion of recipes, there is a relatively small proportion of reported abuse of reformulated OxyContin via recipes, particularly by injecting or snorting routes. Analysis of Internet discussion is a valuable tool for monitoring the impact of abuse-deterrent formulations.
Maximum Margin Clustering of Hyperspectral Data
NASA Astrophysics Data System (ADS)
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One of the recently proposed algorithms is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms are limited to two-class classification and cannot be used directly for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended for multi-class classification and its performance is evaluated. The results show that the proposed algorithm yields acceptable results for hyperspectral data clustering.
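The alternating-optimization idea can be sketched in a few lines: fix the labels and train an SVM, then fix the SVM and relabel points by its decision function, with a crude balance safeguard against the trivial single-cluster solution. This hypothetical sketch is not the paper's algorithm (which also treats the multi-class case); the scikit-learn classes and the k-means warm start are assumptions of the illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def mmc_alternating(X, n_iter=10):
    """Crude two-class maximum-margin clustering by alternating optimization:
    (1) train an SVM on the current labels, (2) relabel points by the decision
    function, splitting at the median score to keep the clusters balanced."""
    y = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # warm start
    for _ in range(n_iter):
        svm = LinearSVC(C=1.0).fit(X, y)
        scores = svm.decision_function(X)
        y_new = (scores > np.median(scores)).astype(int)  # balance safeguard
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y

labels = mmc_alternating(np.random.default_rng(1).normal(size=(100, 5)))
```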
Smear correction of highly variable, frame-transfer CCD images with application to polarimetry.
Iglesias, Francisco A; Feller, Alex; Nagaraju, Krishnappa
2015-07-01
Image smear, produced by the shutterless operation of frame-transfer CCD detectors, can be detrimental for many imaging applications. Existing algorithms used to numerically remove smear do not contemplate cases where intensity levels change considerably between consecutive frame exposures. In this report, we reformulate the smearing model to include specific variations of the sensor illumination. The corresponding desmearing expression and its noise properties are also presented and demonstrated in the context of fast imaging polarimetry.
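For orientation, the classical desmearing step under the usual constant-illumination assumption (which the report generalizes to strongly varying illumination) can be sketched as follows; the per-row smear fraction and the uniform column-sum model are the standard textbook simplifications, not the reformulated model of the abstract, and the numbers are placeholders.

```python
import numpy as np

def desmear_constant(frame, t_transfer, t_exposure):
    """Remove frame-transfer smear assuming constant illumination during
    readout (classical simplified model). Each pixel picks up a fraction
    eps = t_row / t_exposure of every pixel in its column, where t_row is
    the per-row transfer time."""
    n_rows = frame.shape[0]
    eps = (t_transfer / n_rows) / t_exposure          # per-row smear fraction
    col_sum_measured = frame.sum(axis=0)
    # measured column sum = true column sum * (1 + n_rows * eps)
    col_sum_true = col_sum_measured / (1.0 + n_rows * eps)
    return frame - eps * col_sum_true                 # broadcast over rows

clean = desmear_constant(np.random.rand(512, 512), t_transfer=1e-3, t_exposure=0.1)
```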
40 CFR 80.78 - Controls and prohibitions on reformulated gasoline.
Code of Federal Regulations, 2013 CFR
2013-07-01
... reformulated gasoline. 80.78 Section 80.78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.78 Controls and prohibitions on reformulated gasoline. (a) Prohibited activities. (1) No person may manufacture...
40 CFR 80.66 - Calculation of reformulated gasoline properties.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Calculation of reformulated gasoline... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.66 Calculation of reformulated gasoline properties. (a) All volume measurements required by these regulations shall be...
40 CFR 80.78 - Controls and prohibitions on reformulated gasoline.
Code of Federal Regulations, 2011 CFR
2011-07-01
... reformulated gasoline. 80.78 Section 80.78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.78 Controls and prohibitions on reformulated gasoline. (a) Prohibited activities. (1) No person may manufacture...
40 CFR 80.78 - Controls and prohibitions on reformulated gasoline.
Code of Federal Regulations, 2010 CFR
2010-07-01
... reformulated gasoline. 80.78 Section 80.78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.78 Controls and prohibitions on reformulated gasoline. (a) Prohibited activities. (1) No person may manufacture...
40 CFR 80.66 - Calculation of reformulated gasoline properties.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Calculation of reformulated gasoline... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.66 Calculation of reformulated gasoline properties. (a) All volume measurements required by these regulations shall be...
40 CFR 80.78 - Controls and prohibitions on reformulated gasoline.
Code of Federal Regulations, 2012 CFR
2012-07-01
... reformulated gasoline. 80.78 Section 80.78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.78 Controls and prohibitions on reformulated gasoline. (a) Prohibited activities. (1) No person may manufacture...
40 CFR 80.66 - Calculation of reformulated gasoline properties.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Calculation of reformulated gasoline... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.66 Calculation of reformulated gasoline properties. (a) All volume measurements required by these regulations shall be...
40 CFR 80.66 - Calculation of reformulated gasoline properties.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Calculation of reformulated gasoline... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.66 Calculation of reformulated gasoline properties. (a) All volume measurements required by these regulations shall be...
40 CFR 80.78 - Controls and prohibitions on reformulated gasoline.
Code of Federal Regulations, 2014 CFR
2014-07-01
... reformulated gasoline. 80.78 Section 80.78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.78 Controls and prohibitions on reformulated gasoline. (a) Prohibited activities. (1) No person may manufacture...
40 CFR 80.66 - Calculation of reformulated gasoline properties.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Calculation of reformulated gasoline... PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.66 Calculation of reformulated gasoline properties. (a) All volume measurements required by these regulations shall be...
NASA Technical Reports Server (NTRS)
Pindera, Marek-Jerzy; Bednarcyk, Brett A.
1997-01-01
An efficient implementation of the generalized method of cells micromechanics model is presented that allows analysis of periodic unidirectional composites characterized by repeating unit cells containing thousands of subcells. The original formulation, given in terms of Hill's strain concentration matrices that relate average subcell strains to the macroscopic strains, is reformulated in terms of the interfacial subcell tractions as the basic unknowns. This is accomplished by expressing the displacement continuity equations in terms of the stresses and then imposing the traction continuity conditions directly. The result is a mixed formulation wherein the unknown interfacial subcell traction components are related to the macroscopic strain components. Because the stress field throughout the repeating unit cell is piece-wise uniform, the imposition of traction continuity conditions directly in the displacement continuity equations, expressed in terms of stresses, substantially reduces the number of unknown subcell traction (and stress) components, and thus the size of the system of equations that must be solved. Further reduction in the size of the system of continuity equations is obtained by separating the normal and shear traction equations in those instances where the individual subcells are, at most, orthotropic. The reformulated version facilitates detailed analysis of the impact of the fiber cross-section geometry and arrangement on the response of multi-phased unidirectional composites with and without evolving damage. Comparison of execution times obtained with the original and reformulated versions of the generalized method of cells demonstrates the new version's efficiency.
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed, on the basis of a uniform design strategy, into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised. Then the preference information is further incorporated into the preference vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponds to a decomposed preference vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process to guide the search procedure towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. In particular, numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
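A minimal sketch of the decomposition idea, under the assumption of two objectives and a preference expressed as an interval on the first objective's weight: spread the interval into evenly spaced weight vectors and use a Tchebycheff scalarization as each sub-problem's objective. This is illustrative only and omits the boundary-intersection step and the MOEA/D machinery; all values are placeholders.

```python
import numpy as np

def decompose_interval_preference(lo, hi, n_vectors):
    """Spread an interval-valued preference on the first objective's weight
    into evenly distributed two-objective weight vectors (reference directions)."""
    w1 = np.linspace(lo, hi, n_vectors)
    return np.stack([w1, 1.0 - w1], axis=1)

def tchebycheff(f, weights, ideal):
    """Scalarized objective of one sub-problem (one weight vector)."""
    return np.max(weights * np.abs(np.asarray(f) - np.asarray(ideal)))

W = decompose_interval_preference(0.3, 0.6, 5)       # weight on f1 preferred in [0.3, 0.6]
g = tchebycheff([1.2, 0.8], W[0], ideal=[0.0, 0.0])  # illustrative two-objective example
```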
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method is examined and reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent. This requires the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table look-ups. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure of the performance of the Connection Machine for direct particle simulation is provided. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms that are specifically of use for direct particle simulation but which also expand the data parallel diction.
40 CFR 80.46 - Measurement of reformulated gasoline fuel parameters.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Measurement of reformulated gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.46 Measurement of reformulated gasoline fuel parameters. (a) Sulfur. Sulfur content of gasoline and butane must...
40 CFR 80.46 - Measurement of reformulated gasoline fuel parameters.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Measurement of reformulated gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.46 Measurement of reformulated gasoline fuel parameters. (a) Sulfur. Sulfur content of gasoline and butane must...
40 CFR 80.46 - Measurement of reformulated gasoline fuel parameters.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Measurement of reformulated gasoline... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.46 Measurement of reformulated gasoline fuel parameters. (a) Sulfur. Sulfur content of gasoline and butane must...
a Unified Matrix Polynomial Approach to Modal Identification
NASA Astrophysics Data System (ADS)
Allemang, R. J.; Brown, D. L.
1998-04-01
One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. Particularly, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mace, Gerald G.
What has made the ASR program unique is the amount of information that is available. The suite of recently deployed instruments significantly expands the scope of the program (Mather and Voyles, 2013). The breadth of this information allows us to pose sophisticated process-level questions. Our ASR project, now entering its third year, has been about developing algorithms that use this information in ways that fully exploit the new capacity of the ARM data streams. Using optimal estimation (OE) and Markov Chain Monte Carlo (MCMC) inversion techniques, we have developed methodologies that allow us to use multiple radar frequency Doppler spectra along with lidar and passive constraints where data streams can be added or subtracted efficiently and algorithms can be reformulated for various combinations of hydrometeors by exchanging sets of empirical coefficients. These methodologies have been applied to boundary layer clouds, mixed phase snow cloud systems, and cirrus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.
We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints to bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility and optimality based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developed techniques that enumerate basic and thermally coupled distillation configurations, to provide, for the first time, a global optimization based rank-list of distillation configurations.
Hesselmann, Andreas; Görling, Andreas
2011-01-21
A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.
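The computational payoff of such a reformulation is that standard eigensolvers apply once the response equations are cast as a linear generalized eigenvalue problem A x = omega B x. The sketch below uses placeholder matrices (not the actual TDEXX response matrices) and a dense solver; for large problems an iterative Davidson-type or Lanczos/LOBPCG solver that needs only matrix-vector products would be used instead, as the abstract indicates.

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder symmetric matrices standing in for the response matrices of a
# generalized eigenvalue problem  A x = omega B x  (B symmetric positive definite).
rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n)); A = 0.5 * (A + A.T)
B = rng.normal(size=(n, n)); B = B @ B.T + n * np.eye(n)  # make B positive definite

# Dense solution; the lowest eigenvalues play the role of the lowest excitation energies.
omega, X = eigh(A, B)
lowest = omega[:5]
```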
Empowerment: reformulation of a non-Rogerian concept.
Crawford Shearer, Nelma B; Reed, Pamela G
2004-07-01
The authors present a reformulation of empowerment based upon historical and current perspectives of empowerment and a synthesis of existing literature and Rogerian thought. Reformulation of non-Rogerian concepts familiar to nurses is proposed as a strategy to accelerate the mainstreaming of Rogerian thought into nursing practice and research. The reformulation of empowerment as a participatory process of well-being inherent among human beings may provide nurses with new insights for practice. This paper may also serve as a model for reformulating other non-Rogerian concepts and theories for wider dissemination across the discipline.
Reformulation as an Integrated Approach of Four Disciplines: A Qualitative Study with Food Companies
van Gunst, Annelies; Roodenburg, Annet J. C.; Steenhuis, Ingrid H. M.
2018-01-01
In 2014, the Dutch government agreed with the food sector to lower salt, sugar, saturated fat and energy in foods. To reformulate, an integrated approach of four disciplines (Nutrition & Health, Food Technology, Legislation, and Consumer Perspectives) is important for food companies (Framework for Reformulation). The objective of this study was to determine whether this framework accurately reflects reformulation processes in food companies. Seventeen Dutch food companies in the bakery, meat and convenience sector were interviewed with a semi-structured topic list. Interviews were transcribed, coded and analysed. Interviews illustrated that there were opportunities to lower salt, sugar and saturated fat (Nutrition & Health). However, there were barriers to replacing the functionality of these ingredients (Food Technology). Most companies would like the government to push reformulation more (Legislation). Traditional meat products and luxury sweet bakery products were considered less suitable for reformulation (Consumer Perspectives). In addition, the reduction of E-numbers was considered important. The important role of the retailer is stressed by the respondents. In conclusion, all four disciplines are important in the reformulation processes in food companies. Reformulation does not only mean the reduction of salt, saturated fat and sugar for companies, but also the reduction of E-numbers. PMID:29677158
van Gunst, Annelies; Roodenburg, Annet J C; Steenhuis, Ingrid H M
2018-04-20
In 2014, the Dutch government agreed with the food sector to lower salt, sugar, saturated fat and energy in foods. To reformulate, an integrated approach of four disciplines (Nutrition & Health, Food Technology, Legislation, and Consumer Perspectives) is important for food companies (Framework for Reformulation). The objective of this study was to determine whether this framework accurately reflects reformulation processes in food companies. Seventeen Dutch food companies in the bakery, meat and convenience sector were interviewed with a semi-structured topic list. Interviews were transcribed, coded and analysed. Interviews illustrated that there were opportunities to lower salt, sugar and saturated fat (Nutrition & Health). However, there were barriers to replacing the functionality of these ingredients (Food Technology). Most companies would like the government to push reformulation more (Legislation). Traditional meat products and luxury sweet bakery products were considered less suitable for reformulation (Consumer Perspectives). In addition, the reduction of E-numbers was considered important. The important role of the retailer is stressed by the respondents. In conclusion, all four disciplines are important in the reformulation processes in food companies. Reformulation does not only mean the reduction of salt, saturated fat and sugar for companies, but also the reduction of E-numbers.
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.
2012-04-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
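A compact, hypothetical sketch of the general idea - a CUSUM-style sequential likelihood-ratio test in which the unknown post-damage distribution is re-estimated online by maximum likelihood - is given below; it is not the authors' implementation, and the Gaussian feature model, window length and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def sequential_detect(stream, mu0, sigma0, threshold=10.0, min_post_samples=5):
    """CUSUM-style sequential change detection. The pre-damage feature
    distribution N(mu0, sigma0) is known; the post-damage distribution is
    re-estimated by maximum likelihood from recent samples, standing in for
    the known-post-distribution assumption of the classical test."""
    stat, recent = 0.0, []
    for t, x in enumerate(stream):
        recent.append(x)
        window = recent[-50:]                      # crude sliding estimate
        if len(window) >= min_post_samples:
            mu1, sigma1 = np.mean(window), max(np.std(window), 1e-6)
        else:
            mu1, sigma1 = mu0 + sigma0, sigma0     # initial guess before enough data
        llr = norm.logpdf(x, mu1, sigma1) - norm.logpdf(x, mu0, sigma0)
        stat = max(0.0, stat + llr)                # CUSUM recursion
        if stat > threshold:
            return t                               # declare damage at time t
    return None

data = np.r_[np.random.normal(0, 1, 200), np.random.normal(1.5, 1, 100)]
print(sequential_detect(data, mu0=0.0, sigma0=1.0))
```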
Towards an Effective Theory of Reformulation. Part 1; Semantics
NASA Technical Reports Server (NTRS)
Benjamin, D. Paul
1992-01-01
This paper describes an investigation into the structure of representations of sets of actions, utilizing semigroup theory. The goals of this project are twofold: to shed light on the relationship between tasks and representations, leading to a classification of tasks according to the representations they admit; and to develop techniques for automatically transforming representations so as to improve problem-solving performance. A method is demonstrated for automatically generating serial algorithms for representations whose actions form a finite group. This method is then extended to representations whose actions form a finite inverse semigroup.
Ares, Gastón; Aschemann-Witzel, Jessica; Curutchet, María Rosa; Antúnez, Lucía; Machín, Leandro; Vidal, Leticia; Giménez, Ana
2018-05-01
The reformulation of the food products available in the marketplace to improve their nutritional quality has been identified as one of the most cost-effective policies for controlling the global obesity pandemic. Front-of-pack (FOP) nutrition labelling is one of the strategies that has been suggested to encourage the food industry to reformulate their products. However, the extent to which certain FOP labels can encourage product reformulation is dependent on consumer reaction. The aim of the present work was to assess consumers' perception towards product reformulation in the context of the implementation of nutritional warnings, an interpretive FOP nutrition labelling scheme. Three product categories were selected as target products: bread, cream cheese and yogurt, each associated with high content of one target nutrient. For each category, six packages were designed using a 3 × 2 experimental design with the following variables: product version (regular, nutrient-reduced and nutrient-free) and brand (market leader and non-market leader). A total of 306 Uruguayan participants completed a choice experiment with 18 choice sets. Reformulated products without nutritional warnings were preferred by participants compared to regular products with nutritional warnings. No apparent preference for products reformulated into nutrient-reduced or nutrient-free product versions was found, although differences depended on the product category and the specific reformulation strategy. Preference for reformulated products without nutritional warnings was more pronounced for non-market leaders. Results from the present work suggest that reformulation of foods in the context of the implementation of nutritional warnings holds potential to encourage consumers to make more healthful food choices and to cause a reduction of their intake of nutrients associated with non-communicable diseases. Copyright © 2018 Elsevier Ltd. All rights reserved.
Uses of nutrient profiling to address public health needs: from regulation to reformulation.
Drewnowski, Adam
2017-08-01
Nutrient profiling (NP) models rate the nutritional quality of individual foods, based on their nutrient composition. Their goal is to identify nutrient-rich foods, generally defined as those that contain more nutrients than calories and are low in fat, sugar and salt. NP models have provided the scientific basis for evaluating nutrition and health claims and regulating marketing and advertising to children. The food industry has used NP methods to reformulate product portfolios. To help define what we mean by healthy foods, NP models need to be based on published nutrition standards, mandated serving sizes and open-source nutrient composition databases. Specifically, the development and testing of NP models for public health should follow the seven decision steps outlined by the European Food Safety Authority. Consistent with this scheme, the nutrient-rich food (NRF) family of indices was based on a variable number of qualifying nutrients (from six to fifteen) and on three disqualifying nutrients (saturated fat, added sugar, sodium). The selection of nutrients and daily reference amounts followed nutrient standards for the USA. The base of calculation was 418·4 kJ (100 kcal), in preference to 100 g, or serving sizes. The NRF algorithms, based on unweighted sums of percent daily values, subtracted negative (LIM) from positive (NRn) subscores (NRn - LIM). NRF model performance was tested with respect to energy density and independent measures of a healthy diet. Whereas past uses of NP modelling have been regulatory or educational, voluntary product reformulation by the food industry may have most impact on public health.
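The NRn - LIM arithmetic can be made concrete with a toy calculation per 418.4 kJ (100 kcal): sum the capped percent daily values of the qualifying nutrients and subtract the percent of maximum recommended values of the three disqualifying nutrients. The reference amounts below are illustrative placeholders, not the published NRF daily values.

```python
def nrf_score(nutrients_per_100kcal, qualifying_dv, disqualifying_mrv, cap=100.0):
    """Toy NRF-style index: capped %DV of qualifying nutrients minus % of the
    maximum recommended values of the disqualifying nutrients, per 100 kcal.
    The reference-amount dictionaries are illustrative placeholders only."""
    nr = sum(min(100.0 * nutrients_per_100kcal.get(n, 0.0) / dv, cap)
             for n, dv in qualifying_dv.items())
    lim = sum(100.0 * nutrients_per_100kcal.get(n, 0.0) / mrv
              for n, mrv in disqualifying_mrv.items())
    return nr - lim

qualifying = {"protein_g": 50, "fiber_g": 28, "vitamin_c_mg": 90}          # placeholder DVs
disqualifying = {"sat_fat_g": 20, "added_sugar_g": 50, "sodium_mg": 2300}  # placeholder MRVs
food = {"protein_g": 5, "fiber_g": 2, "sat_fat_g": 1,
        "added_sugar_g": 8, "sodium_mg": 150}                              # per 100 kcal
print(nrf_score(food, qualifying, disqualifying))
```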
Nutrient profiling for product reformulation: public health impact and benefits for the consumer.
Lehmann, Undine; Charles, Véronique Rheiner; Vlassopoulos, Antonis; Masset, Gabriel; Spieldenner, Jörg
2017-08-01
The food industry holds great potential for driving consumers to adopt healthy food choices as (re)formulation of foods can improve the nutritional quality of these foods. Reformulation has been identified as a cost-effective intervention in addressing non-communicable diseases as it does not require significant alterations of consumer behaviour and dietary habits. Nutrient profiling (NP), the science of categorizing foods based on their nutrient composition, has emerged as an essential tool and is implemented through many different profiling systems to guide reformulation and other nutrition policies. NP systems should be adapted to their specific purposes as it is not possible to design one system that can equally address all policies and purposes, e.g. reformulation and labelling. The present paper discusses some of the key principles and specificities that underlie a NP system designed for reformulation with the example of the Nestlé nutritional profiling system. Furthermore, the impact of reformulation at the level of the food product, dietary intakes and public health are reviewed. Several studies showed that food and beverage reformulation, guided by a NP system, may be effective in improving population nutritional intakes and thereby its health status. In order to achieve its maximum potential and modify the food environment in a beneficial manner, reformulation should be implemented by the entire food sector. Multi-stakeholder partnerships including governments, food industry, retailers and consumer associations that will state concrete time-bound objectives accompanied by an independent monitoring system are the potential solution.
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
State-of-charge estimation in lithium-ion batteries: A particle filter approach
NASA Astrophysics Data System (ADS)
Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.
2016-11-01
The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The pseudo-two-dimensional (P2D) model is one model that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. The partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the P2D model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
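For orientation, a generic bootstrap particle filter on a scalar toy state-space model shows the predict/weight/resample cycle that such estimators build on; the P2D-specific structure, the independent sweeps over time and space, and the 'tether' particle of the article are not reproduced here, and all model functions and noise levels are illustrative.

```python
import numpy as np

def bootstrap_pf(observations, f, h, q_std, r_std, n_particles=500, x0_std=1.0):
    """Generic bootstrap particle filter for x_k = f(x_{k-1}) + w, y_k = h(x_k) + v,
    with additive Gaussian noise. Returns the posterior-mean state trajectory."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, x0_std, n_particles)
    means = []
    for y in observations:
        # predict
        particles = f(particles) + rng.normal(0.0, q_std, n_particles)
        # weight by the measurement likelihood
        w = np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2)
        w /= w.sum() + 1e-300
        means.append(np.sum(w * particles))
        # systematic resampling to fight weight degeneracy
        cdf = np.cumsum(w)
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(cdf, positions), n_particles - 1)
        particles = particles[idx]
    return np.array(means)

# toy usage: a stable scalar system observed directly
est = bootstrap_pf(np.random.normal(size=50), f=lambda x: 0.95 * x, h=lambda x: x,
                   q_std=0.1, r_std=0.5)
```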
Integrated Hardware and Software for No-Loss Computing
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
When an algorithm is distributed across multiple threads executing on many distinct processors, a loss of one of those threads or processors can potentially result in the total loss of all the incremental results up to that point. When the implementation is massively hardware distributed, the probability of a hardware failure during the course of a long execution is potentially high. Traditionally, this problem has been addressed by establishing checkpoints where the current state of some or part of the execution is saved. Then, in the event of a failure, this state information can be used to recompute that point in the execution and resume the computation from that point. A serious problem that arises when one distributes a problem across multiple threads and physical processors is that one increases the likelihood of the algorithm failing due to no fault of the scientist but as a result of hardware faults coupled with operating system problems. With good reason, scientists expect their computing tools to serve them and not the other way around. What is novel here is a unique combination of hardware and software that reformulates an application into a monolithic structure that can be monitored in real-time and dynamically reconfigured in the event of a failure. This unique reformulation of hardware and software will provide advanced aeronautical technologies to meet the challenges of next-generation systems in aviation, for civilian and scientific purposes, in our atmosphere and in atmospheres of other worlds. In particular, with respect to NASA's manned flight to Mars, this technology addresses the critical requirements for improving safety and increasing reliability of manned spacecraft.
Product reformulation and nutritional improvements after new competitive food standards in schools.
Jahn, Jaquelyn L; Cohen, Juliana Fw; Gorski-Findling, Mary T; Hoffman, Jessica A; Rosenfeld, Lindsay; Chaffee, Ruth; Smith, Lauren; Rimm, Eric B
2018-04-01
In 2012, Massachusetts enacted school competitive food and beverage standards similar to national Smart Snacks. These standards aim to improve the nutritional quality of competitive snacks. It was previously demonstrated that a majority of foods and beverages were compliant with the standards, but it was unknown whether food manufacturers reformulated products in response to the standards. The present study assessed whether products were reformulated after standards were implemented; the availability of reformulated products outside schools; and whether compliance with the standards improved the nutrient composition of competitive snacks. An observational cohort study documenting all competitive snacks sold before (2012) and after (2013 and 2014) the standards were implemented. The sample included thirty-six school districts with both a middle and high school. After 2012, energy, saturated fat, Na and sugar decreased and fibre increased among all competitive foods. By 2013, 8 % of foods were reformulated, as were an additional 9 % by 2014. Nearly 15 % of reformulated foods were look-alike products that could not be purchased at supermarkets. Energy and Na in beverages decreased after 2012, in part facilitated by smaller package sizes. Massachusetts' law was effective in improving the nutritional content of snacks and product reformulation helped schools adhere to the law. This suggests fully implementing Smart Snacks standards may similarly improve the foods available in schools nationally. However, only some healthier reformulated foods were available outside schools.
Sequential structural damage diagnosis algorithm using a change point detection method
NASA Astrophysics Data System (ADS)
Noh, H.; Rajagopal, R.; Kiremidjian, A. S.
2013-11-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
MHD Turbulence, div B = 0 and Lattice Boltzmann Simulations
NASA Astrophysics Data System (ADS)
Phillips, Nate; Keating, Brian; Vahala, George; Vahala, Linda
2006-10-01
The question of div B = 0 in MHD simulations is a crucial issue. Here we consider lattice Boltzmann simulations for MHD (LB-MHD). One introduces a scalar distribution function for the velocity field and a vector distribution function for the magnetic field. This asymmetry is due to the different symmetries in the tensors arising in the time evolution of these fields. The simple algorithm of streaming and local collisional relaxation is ideally parallelized and vectorized -- leading to the best sustained performance/PE of any code run on the Earth Simulator. By reformulating the BGK collision term, a simple implicit algorithm can be immediately transformed into an explicit algorithm that permits simulations at quite low viscosity and resistivity. However, div B = 0 is not an imposed constraint. Currently we are examining new formulations of LB-MHD that impose the div B constraint -- either through an entropic-like formulation or by introducing forcing terms into the momentum equations and permitting simpler forms of relaxation distributions.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-09-20
A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving the non-linear state estimation problem. However, the UKF usually performs well only under Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filtered state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng
2016-01-01
A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving the non-linear state estimation problem. However, the UKF usually performs well only under Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm can enhance the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filtered state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm. PMID:27657069
Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.
Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews
2015-03-01
This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
... Request; Comment Request; Reformulated Gasoline Commingling Provisions AGENCY: Environmental Protection... information collection request (ICR), ``Reformulated Gasoline Commingling Provisions'' (EPA ICR No.2228.04.... Abstract: EPA would like to continue collecting notifications from gasoline retailers and wholesale...
Acceleration of the Smith-Waterman algorithm using single and multiple graphics processors
NASA Astrophysics Data System (ADS)
Khajeh-Saeed, Ali; Poole, Stephen; Blair Perot, J.
2010-06-01
Finding regions of similarity between two very long data streams is a computationally intensive problem referred to as sequence alignment. Alignment algorithms must allow for imperfect sequence matching with different starting locations and some gaps and errors between the two data sequences. Perhaps the most well known application of sequence matching is the testing of DNA or protein sequences against genome databases. The Smith-Waterman algorithm is a method for precisely characterizing how well two sequences can be aligned and for determining the optimal alignment of those two sequences. Like many applications in computational science, the Smith-Waterman algorithm is constrained by the memory access speed and can be accelerated significantly by using graphics processors (GPUs) as the compute engine. In this work we show that effective use of the GPU requires a novel reformulation of the Smith-Waterman algorithm. The performance of this new version of the algorithm is demonstrated using the SSCA#1 (Bioinformatics) benchmark running on one GPU and on up to four GPUs executing in parallel. The results indicate that for large problems a single GPU is up to 45 times faster than a CPU for this application, and the parallel implementation shows linear speed up on up to 4 GPUs.
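For reference, the Smith-Waterman recurrence itself is short; the sketch below is a plain CPU scoring implementation with a linear gap penalty, intended only to make the recurrence explicit. It is not the GPU reformulation described in the abstract, and the scoring parameters are illustrative.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Reference Smith-Waterman local alignment score: fill the DP matrix
    H[i][j] = max(0, diag + sub, up + gap, left + gap) and return the best cell."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + sub, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("ACACACTA", "AGCACACA"))
```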
Food and beverage product reformulation as a corporate political strategy.
Scott, C; Hawkins, B; Knai, C
2017-01-01
Product reformulation - the process of altering a food or beverage product's recipe or composition to improve the product's health profile - is a prominent response to the obesity and noncommunicable disease epidemics in the U.S. To date, reformulation in the U.S. has been largely voluntary and initiated by actors within the food and beverage industry. Similar voluntary efforts by the tobacco and alcohol industry have been considered to be a mechanism of corporate political strategy to shape public health policies and decisions to suit commercial needs. We propose a taxonomy of food and beverage industry corporate political strategies that builds on the existing literature. We then analyzed the industry's responses to a 2014 U.S. government consultation on product reformulation, run as part of the process to define the 2015 Dietary Guidelines for Americans. We qualitatively coded the industry's responses for predominant narratives and framings around reformulation using a purposely-designed coding framework, and compared the results to the taxonomy. The food and beverage industry in the United States used a highly similar narrative around voluntary product reformulation in their consultation responses: that reformulation is "part of the solution" to obesity and NCDs, even though their products or industry are not large contributors to the problem, and that progress has been made despite reformulation posing significant technical challenges. This narrative and the frames used in the submissions illustrate the four categories of the taxonomy: participation in the policy process, influencing the framing of the nutrition policy debate, creating partnerships, and influencing the interpretation of evidence. These strategic uses of reformulation align with previous research on food and beverage corporate political strategy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ultra-processed foods and the limits of product reformulation.
Scrinis, Gyorgy; Monteiro, Carlos Augusto
2018-01-01
The nutritional reformulation of processed food and beverage products has been promoted as an important means of addressing the nutritional imbalances in contemporary dietary patterns. The focus of most reformulation policies is the reduction in quantities of nutrients-to-limit - Na, free sugars, SFA, trans-fatty acids and total energy. The present commentary examines the limitations of what we refer to as 'nutrients-to-limit reformulation' policies and practices, particularly when applied to ultra-processed foods and drink products. Beyond these nutrients-to-limit, there are a range of other potentially harmful processed and industrially produced ingredients used in the production of ultra-processed products that are not usually removed during reformulation. The sources of nutrients-to-limit in these products may be replaced with other highly processed ingredients and additives, rather than with whole or minimally processed foods. Reformulation policies may also legitimise current levels of consumption of ultra-processed products in high-income countries and increased levels of consumption in emerging markets in the global South.
Algebraic criteria for positive realness relative to the unit circle.
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1973-01-01
A definition is presented of the circle positive realness of real rational functions relative to the unit circle in the complex variable plane. The problem of testing this kind of positive reality is reduced to the algebraic problem of determining the distribution of zeros of a real polynomial with respect to and on the unit circle. Such reformulation of the problem avoids the search for explicit information about imaginary poles of rational functions. The stated algebraic problem is solved by applying the polynomial criteria of Marden (1966) and Jury (1964), and a completely recursive algorithm for circle positive realness is obtained.
Damage identification of a TLP floating wind turbine by meta-heuristic algorithms
NASA Astrophysics Data System (ADS)
Ettefagh, M. M.
2015-12-01
Damage identification of offshore floating wind turbines from vibration/dynamic signals is an important and emerging research field in Structural Health Monitoring (SHM). In this paper, a new damage identification method is proposed based on meta-heuristic algorithms, using the dynamic response of the TLP (Tension-Leg Platform) floating wind turbine structure. The Genetic Algorithm (GA), Artificial Immune System (AIS), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC) are chosen for minimizing the objective function, defined appropriately for the damage identification purpose. In addition to studying the capability of these algorithms to correctly identify the damage, the effect of the response type on the identification results is studied. The results of the proposed damage identification are also investigated considering possible uncertainties of the structure. Finally, to evaluate the proposed method under realistic conditions, a 1/100-scale experimental setup of a TLP Floating Wind Turbine (TLPFWT) is built in the laboratory and the proposed damage identification method is applied to the scaled turbine.
Reformulation as a Measure of Student Expression in Classroom Interaction.
ERIC Educational Resources Information Center
Dobson, James J.
1995-01-01
Investigates teacher reformulation of student talk in order to determine the manner in which teachers affect student meaning and expression. Findings indicate that reformulation is a device used by teachers to control classroom dialog and that teachers disproportionately perform the language functions most commonly associated with higher-order…
NASA Technical Reports Server (NTRS)
Arnold, Steven M; Bednarcyk, Brett; Aboudi, Jacob
2004-01-01
The High-Fidelity Generalized Method of Cells (HFGMC) micromechanics model has recently been reformulated by Bansal and Pindera (in the context of elastic phases with perfect bonding) to maximize its computational efficiency. This reformulated version of HFGMC has now been extended to include both inelastic phases and imperfect fiber-matrix bonding. The present paper presents an overview of the HFGMC theory in both its original and reformulated forms and a comparison of the results of the two implementations. The objective is to establish the correlation between the two HFGMC formulations and document the improved efficiency offered by the reformulation. The results compare the macro and micro scale predictions of the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) versions of both formulations into the inelastic regime, and, in the case of the discontinuous reinforcement version, with both perfect and weak interfacial bonding. The results demonstrate that identical predictions are obtained using either the original or reformulated implementations of HFGMC aside from small numerical differences in the inelastic regime due to the different implementation schemes used for the inelastic terms present in the two formulations. Finally, a direct comparison of execution times is presented for the original formulation and reformulation code implementations. It is shown that as the discretization employed in representing the composite repeating unit cell becomes increasingly refined (requiring a larger number of sub-volumes), the reformulated implementation becomes significantly (approximately an order of magnitude at best) more computationally efficient in both the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) cases.
How Do Children Reformulate Their Search Queries?
ERIC Educational Resources Information Center
Rutter, Sophie; Ford, Nigel; Clough, Paul
2015-01-01
Introduction: This paper investigates techniques used by children in year 4 (age eight to nine) of a UK primary school to reformulate their queries, and how they use information retrieval systems to support query reformulation. Method: An in-depth study analysing the interactions of twelve children carrying out search tasks in a primary school…
Nietzsche contra "Self-Reformulation"
ERIC Educational Resources Information Center
Fennell, J.
2005-01-01
Not only do the writings of Nietzsche--early and late--fail to support the pedagogy of self-reformulation, this doctrine embodies what for him is worst in man and would destroy that which is higher. The pedagogy of self-reformulation is also incoherent. In contrast, Nietzsche offers a fruitful and comprehensive theory of education that, while…
Reformulation of Rothermel's wildland fire behaviour model for heterogeneous fuelbeds.
David V. Sandberg; Cynthia L. Riccardi; Mark D. Schaaf
2007-01-01
Abstract: The Fuel Characteristic Classification System (FCCS) includes equations that calculate energy release and one-dimensional spread rate in quasi-steady-state fires in heterogeneous but spatially uniform wildland fuelbeds, using a reformulation of the widely used Rothermel fire spread model. This reformulation provides an automated means to predict fire behavior...
21 CFR 106.120 - New formulations and reformulations.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false New formulations and reformulations. 106.120... § 106.120 New formulations and reformulations. (a) Information required by section 412(b)(2) and (3) of... manufacturer and that has left an establishment subject to the control of the manufacturer may not provide the...
21 CFR 106.120 - New formulations and reformulations.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false New formulations and reformulations. 106.120... § 106.120 New formulations and reformulations. (a) Information required by section 412(b)(2) and (3) of... manufacturer and that has left an establishment subject to the control of the manufacturer may not provide the...
Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria
Hunter, Eric J.
2015-01-01
Purpose: Schoolteachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method: Vibration dosimetry is reformulated with the inclusion of collision stress. Two methods of estimating amplitude of vocal-fold vibration are compared to capture variations in vocal intensity. Energy loss from collision is added to the energy-dissipation dose. An equal-energy-dissipation criterion is defined and used on the teacher corpus as a potential-damage risk criterion. Results: Comparison of time-, cycle-, distance-, and energy-dose calculations for 57 teachers reveals a progression in information content in the ability to capture variations in duration, speaking pitch, and vocal intensity. The energy-dissipation dose carries the greatest promise in capturing excessive tissue stress and collision but also the greatest liability, due to uncertainty in parameters. Cycle dose is least correlated with the other doses. Conclusion: As a first guide to damage risk in excessive voice use, the equal-energy-dissipation dose criterion can be used to structure trade-off relations between loudness, adduction, and duration of speech. PMID:26172434
In-context query reformulation for failing SPARQL queries
NASA Astrophysics Data System (ADS)
Viswanathan, Amar; Michaelis, James R.; Cassidy, Taylor; de Mel, Geeth; Hendler, James
2017-05-01
Knowledge bases for decision support systems are growing increasingly complex, through continued advances in data ingest and management approaches. However, humans do not possess the cognitive capabilities to retain a bird's-eye view of such knowledge bases, and may end up issuing unsatisfiable queries to such systems. This work focuses on the implementation of a query reformulation approach for graph-based knowledge bases, specifically designed to support the Resource Description Framework (RDF). The reformulation approach presented is instance- and schema-aware. Thus, in contrast to relaxation techniques found in the state of the art, the presented approach produces in-context query reformulation.
A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks
Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping
2013-01-01
In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
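The waypoint reformulation described above lends itself to off-the-shelf nonlinear programming. The sketch below is a minimal illustration of the idea, not the authors' SNOPT-based implementation: the free intermediate waypoints are the decision variables, the objective is total path length, and circular no-fly zones enter as inequality constraints checked at the waypoints only (the paper also treats polygonal zones and full segment-zone intersection tests). All coordinates, zone data and the waypoint count are made-up example values.

```python
# Minimal waypoint path-planning sketch using SciPy's SLSQP solver.
import numpy as np
from scipy.optimize import minimize

start = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
zones = [(np.array([5.0, 0.0]), 2.0)]     # (center, radius) circular no-fly zones (assumed data)
n_wp = 3                                   # number of free intermediate waypoints

def path(x):
    # stack fixed endpoints with the free waypoints
    return np.vstack([start, x.reshape(n_wp, 2), goal])

def length(x):
    p = path(x)
    return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))

def clearance(x):
    # require every waypoint to lie outside every zone (segment checks omitted for brevity)
    p = path(x)
    g = []
    for c, r in zones:
        g.extend(np.linalg.norm(p - c, axis=1) - r)
    return np.array(g)

x0 = np.linspace(start, goal, n_wp + 2)[1:-1].ravel() + 0.5   # nudge the initial guess off the zone axis
res = minimize(length, x0, method="SLSQP",
               constraints={"type": "ineq", "fun": clearance})
print(res.x.reshape(n_wp, 2), res.fun)
```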
Efficient field-theoretic simulation of polymer solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106
2014-12-14
We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field-theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field-theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
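As a side note, the exponential time differencing (ETD) idea referred to above treats the stiff linear part of the update exactly and only the remaining terms explicitly. A minimal first-order sketch on a scalar model problem follows; the coefficients, the cubic nonlinearity and the noise handling are placeholders, not the paper's polymer field theory.

```python
# First-order exponential time differencing (ETD1) on du/dt = c*u + N(u) + noise:
# the linear factor exp(c*dt) is applied exactly, the rest is explicit.
import numpy as np

rng = np.random.default_rng(0)
c = -50.0 + 0.0j          # stiff linear coefficient (assumed)
dt, nsteps = 1e-2, 1000
u = 1.0 + 0.5j

def N(u):                 # weak nonlinearity (assumed)
    return -0.1 * u**3

for _ in range(nsteps):
    eta = rng.normal(scale=np.sqrt(dt))                      # simple Langevin-type noise increment
    u = u * np.exp(c * dt) + N(u) * (np.exp(c * dt) - 1.0) / c + eta
print(u)
```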
Process for conversion of lignin to reformulated hydrocarbon gasoline
Shabtai, Joseph S.; Zmierczak, Wlodzimierz W.; Chornet, Esteban
1999-09-28
A process for converting lignin into high-quality reformulated hydrocarbon gasoline compositions in high yields is disclosed. The process is a two-stage, catalytic reaction process that produces a reformulated hydrocarbon gasoline product with a controlled amount of aromatics. In the first stage, a lignin material is subjected to a base-catalyzed depolymerization reaction in the presence of a supercritical alcohol as a reaction medium, to thereby produce a depolymerized lignin product. In the second stage, the depolymerized lignin product is subjected to a sequential two-step hydroprocessing reaction to produce a reformulated hydrocarbon gasoline product. In the first hydroprocessing step, the depolymerized lignin is contacted with a hydrodeoxygenation catalyst to produce a hydrodeoxygenated intermediate product. In the second hydroprocessing step, the hydrodeoxygenated intermediate product is contacted with a hydrocracking/ring hydrogenation catalyst to produce the reformulated hydrocarbon gasoline product which includes various desirable naphthenic and paraffinic compounds.
Autoregressive statistical pattern recognition algorithms for damage detection in civil structures
NASA Astrophysics Data System (ADS)
Yao, Ruigen; Pakzad, Shamim N.
2012-08-01
Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi degree-of-freedom system is generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of proposed algorithms.
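One common variant of the autoregressive approach mentioned above fits an AR model to baseline data and monitors the residuals of new data against a control limit. The sketch below is a simplified stand-in with synthetic signals, not the authors' exact feature-extraction or resampling scheme; the model order and the 20% control limit are assumptions.

```python
# AR-residual novelty check: fit AR(p) on baseline data, alarm if the test
# record's residual spread exceeds an assumed control limit.
import numpy as np

def fit_ar(x, p):
    # least-squares fit of x[t] ~ sum_k a_k * x[t-k]
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    coeff, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coeff

def residuals(x, coeff):
    p = len(coeff)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return x[p:] - X @ coeff

rng = np.random.default_rng(1)
baseline = rng.normal(size=5000)             # healthy-state acceleration record (synthetic)
test = 1.4 * rng.normal(size=5000)           # altered-variance stand-in for a damaged state

coeff = fit_ar(baseline, p=4)
sigma0 = np.std(residuals(baseline, coeff))  # reference residual spread
sigma_test = np.std(residuals(test, coeff))
print("alarm" if sigma_test > 1.2 * sigma0 else "ok")
```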
ERIC Educational Resources Information Center
Mackinnon, Sean P.; Sherry, Simon B.; Graham, Aislin R.; Stewart, Sherry H.; Sherry, Dayna L.; Allen, Stephanie L.; Fitzpatrick, Skye; McGrath, Daniel S.
2011-01-01
The perfectionism model of binge eating (PMOBE) is an integrative model explaining why perfectionism is related to binge eating. This study reformulates and tests the PMOBE, with a focus on addressing limitations observed in the perfectionism and binge-eating literature. In the reformulated PMOBE, concern over mistakes is seen as a destructive…
Applying a Consumer Behavior Lens to Salt Reduction Initiatives.
Regan, Áine; Kent, Monique Potvin; Raats, Monique M; McConnon, Áine; Wall, Patrick; Dubois, Lise
2017-08-18
Reformulation of food products to reduce salt content has been a central strategy for achieving population level salt reduction. In this paper, we reflect on current reformulation strategies and consider how consumer behavior determines the ultimate success of these strategies. We consider the merits of adopting a 'health by stealth', silent approach to reformulation compared to implementing a communications strategy which draws on labeling initiatives in tandem with reformulation efforts. We end this paper by calling for a multi-actor approach which utilizes co-design, participatory tools to facilitate the involvement of all stakeholders, including, and especially, consumers, in making decisions around how best to achieve population-level salt reduction.
An assessment of the potential health impacts of food reformulation.
Leroy, P; Réquillart, V; Soler, L-G; Enderli, G
2016-06-01
Policies focused on food quality are intended to facilitate healthy choices by consumers, even those who are not fully informed about the links between food consumption and health. The goal of this paper is to evaluate the potential impact of such a food reformulation scenario on health outcomes. We first created reformulation scenarios adapted to the French characteristics of foods. After computing the changes in the nutrient intakes of representative consumers, we determined the health effects of these changes. To do so, we used the DIETRON health assessment model, which calculates the number of deaths avoided by changes in food and nutrient intakes. Depending on the reformulation scenario, the total impact of reformulation varies between 2408 and 3597 avoided deaths per year, which amounts to a 3.7-5.5% reduction in mortality linked to diseases considered in the DIETRON model. The impacts are much higher for men than for women and much higher for low-income categories than for high-income categories. These differences result from the differences in consumption patterns and initial disease prevalence among the various income categories. Even without any changes in consumers' behaviors, realistic food reformulation may have significant health outcomes.
Detection of insect damage in almonds
NASA Astrophysics Data System (ADS)
Kim, Soowon; Schatzki, Thomas F.
1999-01-01
Pinhole insect damage in natural almonds is very difficult to detect on-line. Further, evidence exists relating insect damage to aflatoxin contamination. Hence, for quality and health reasons, methods to detect and remove such damaged nuts are of great importance. In this study, we explored the possibility of using x-ray imaging to detect pinhole damage in almonds caused by insects. X-ray film images of about 2000 almonds and x-ray linescan images of only 522 pinhole-damaged almonds were obtained. The pinhole-damaged region appeared slightly darker than the non-damaged region in x-ray negative images. A machine recognition algorithm was developed to detect these darker regions. The algorithm used first-order and second-order information to identify the damaged region. To reduce the possibility of false positives due to the germ region in high-resolution images, germ detection and removal routines were also included. With film images, the algorithm achieved approximately an 81 percent correct recognition rate with only 1 percent false positives, whereas line-scan images yielded 65 percent correct recognition of pinholes with about 9 percent false positives. The algorithm was very fast and efficient, requiring only minimal computation time. If implemented on-line, the theoretical throughput of this recognition system would be 66 nuts/second.
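A toy version of the "darker than its neighbourhood" test described above can be written with local first-order (mean) and second-order (variance) statistics. This is an illustrative sketch on a synthetic image, not the authors' calibrated algorithm; the window size, threshold and defect depth are assumptions, and no germ-removal step is included.

```python
# Flag pixels that are unusually dark relative to their local neighbourhood.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
img = 0.8 + 0.01 * rng.normal(size=(64, 64))   # bright, slightly noisy background (synthetic)
img[30:34, 30:34] -= 0.12                      # small darker "pinhole" patch

mean = uniform_filter(img, size=15)            # first-order statistic: local mean
mean_sq = uniform_filter(img**2, size=15)
std = np.sqrt(np.maximum(mean_sq - mean**2, 1e-12))   # second-order statistic: local spread
z = (img - mean) / std                         # darkness relative to the neighbourhood
damage_mask = z < -3.0                         # assumed detection threshold
print("flagged pixels:", int(damage_mask.sum()),
      "| defect centre flagged:", bool(damage_mask[31, 31]))
```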
Algorithms for the computation of solutions of the Ornstein-Zernike equation.
Peplow, A T; Beardmore, R E; Bresme, F
2006-10-01
We introduce a robust and efficient methodology to solve the Ornstein-Zernike integral equation using the pseudoarc length (PAL) continuation method that reformulates the integral equation in an equivalent but nonstandard form. This enables the computation of solutions in regions where the compressibility experiences large changes or where the existence of multiple solutions and so-called branch points prevents Newton's method from converging. We illustrate the use of the algorithm with a difficult problem that arises in the numerical solution of integral equations, namely the evaluation of the so-called no-solution line of the Ornstein-Zernike hypernetted chain (HNC) integral equation for the Lennard-Jones potential. We are able to use the PAL algorithm to solve the integral equation along this line and to connect physical and nonphysical solution branches (both isotherms and isochores) where appropriate. We also show that PAL continuation can compute solutions within the no-solution region that cannot be computed when Newton and Picard methods are applied directly to the integral equation. While many solutions that we find are new, some correspond to states with negative compressibility and consequently are not physical.
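The pseudo-arclength (PAL) idea itself is easy to state on a scalar model problem: instead of stepping the parameter directly, which fails at folds, one steps along the solution branch and appends an arclength condition to the Newton system. The sketch below traces the folded branch of f(u, lam) = u^3 - u - lam = 0; it only illustrates the continuation mechanics and has nothing to do with the Ornstein-Zernike kernels or the HNC closure themselves.

```python
# Keller-style pseudo-arclength continuation on a scalar equation with folds.
import numpy as np

def f(u, lam):
    return u**3 - u - lam

def jac(u, lam):
    return np.array([3.0 * u**2 - 1.0, -1.0])      # [df/du, df/dlam]

def corrector(u, lam, u0, lam0, du, dlam, ds, iters=20):
    # Newton iteration on the augmented system {f = 0, arclength condition = 0}
    for _ in range(iters):
        F = np.array([f(u, lam),
                      du * (u - u0) + dlam * (lam - lam0) - ds])
        if np.linalg.norm(F) < 1e-12:
            break
        J = np.vstack([jac(u, lam), [du, dlam]])
        u, lam = np.array([u, lam]) - np.linalg.solve(J, F)
    return u, lam

u = -1.5
lam = u**3 - u                   # start on the branch
du, dlam, ds = 1.0, 0.0, 0.05    # initial tangent guess and arclength step
branch = [(lam, u)]
for _ in range(120):
    u_new, lam_new = corrector(u + du * ds, lam + dlam * ds, u, lam, du, dlam, ds)
    t = np.array([u_new - u, lam_new - lam])
    du, dlam = t / np.linalg.norm(t)              # secant approximation of the tangent
    u, lam = u_new, lam_new
    branch.append((lam, u))
print(branch[-1])                # the branch has been traced through both folds
```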
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
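A schematic reconstruction of the dualization loop on a deliberately tiny 1-D problem is sketched below (not the implementation behind the paper): the failure indicator is priced into the stage cost with a multiplier lam, the Lagrangian is minimized by ordinary backward dynamic programming, and lam is adjusted by bisection, used here as a simple stand-in for the exponentially convergent root-finding step, until the resulting policy's risk meets a target Delta. All problem data are invented.

```python
import numpy as np

N, T = 21, 15                       # grid cells and horizon (toy values)
goal, fail = 18, {9, 10}            # target cell and failure cells (assumed)
actions = [-1, 0, 1]
p_slip = 0.1                        # chance the commanded move is perturbed

def step_dist(s, a):
    # distribution over next states for action a (with slips, clipped to the grid)
    probs = {}
    for d, p in [(a, 1 - 2 * p_slip), (a - 1, p_slip), (a + 1, p_slip)]:
        s2 = min(max(s + d, 0), N - 1)
        probs[s2] = probs.get(s2, 0.0) + p
    return probs

def solve_dp(lam):
    # backward DP on the Lagrangian cost: stage cost + lam * indicator(failure)
    V = np.abs(np.arange(N) - goal).astype(float)        # terminal cost
    policy = np.zeros((T, N), dtype=int)
    for t in reversed(range(T)):
        Vn = np.empty(N)
        for s in range(N):
            best = np.inf
            for ai, a in enumerate(actions):
                q = abs(s - goal) + lam * (s in fail)
                q += sum(p * V[s2] for s2, p in step_dist(s, a).items())
                if q < best:
                    best, policy[t, s] = q, ai
            Vn[s] = best
        V = Vn
    return policy

def risk(policy, s0=2):
    # probability of ever entering a failure cell under the policy
    # (failure treated as absorbing here; a simplification kept for brevity)
    dist = np.zeros(N); dist[s0] = 1.0
    failed = 0.0
    for t in range(T):
        nxt = np.zeros(N)
        for s in range(N):
            if dist[s] == 0.0:
                continue
            if s in fail:
                failed += dist[s]
                continue
            for s2, p in step_dist(s, actions[policy[t, s]]).items():
                nxt[s2] += dist[s] * p
        dist = nxt
    return failed

Delta = 0.05
lo, hi = 0.0, 100.0
for _ in range(30):                 # bisection on the dual variable (risk is roughly monotone in lam)
    lam = 0.5 * (lo + hi)
    r = risk(solve_dp(lam))
    lo, hi = (lam, hi) if r > Delta else (lo, lam)
print("lam ~", hi, "risk", risk(solve_dp(hi)))
```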
Comparison of penalty functions on a penalty approach to mixed-integer optimization
NASA Astrophysics Data System (ADS)
Francisco, Rogério B.; Costa, M. Fernanda P.; Rocha, Ana Maria A. C.; Fernandes, Edite M. G. P.
2016-06-01
In this paper, we present a comparative study involving several penalty functions that can be used in a penalty approach for globally solving bound mixed-integer nonlinear programming (bMINLP) problems. The penalty approach relies on a continuous reformulation of the bMINLP problem by adding a particular penalty term to the objective function. A penalty function based on the 'erf' function is proposed. The continuous nonlinear optimization problems are sequentially solved by the population-based firefly algorithm. Preliminary numerical experiments are carried out in order to analyze the quality of the produced solutions, when compared with other penalty functions available in the literature.
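The continuous-reformulation step can be pictured with a tiny bound-constrained example: integrality of selected variables is enforced by a penalty that vanishes at integer points and rises steeply between them. The erf-shaped profile below is an assumption for illustration, not necessarily the paper's exact functional form, and SciPy's differential evolution is used as the population-based global optimizer in place of the firefly algorithm.

```python
# Penalty-based continuous reformulation of a toy bound-constrained MINLP.
import numpy as np
from scipy.special import erf
from scipy.optimize import differential_evolution

def int_penalty(y, mu=0.05):
    # ~0 at each integer, rising steeply (erf-shaped) in between
    frac = np.abs(y - np.round(y))
    return np.sum(erf(frac / mu))

def objective(x, rho=50.0):
    # x[0] is continuous, x[1] is required to be integer
    f = (x[0] - 1.3) ** 2 + (x[1] - 2.6) ** 2
    return f + rho * int_penalty(x[1:])

res = differential_evolution(objective, bounds=[(-5, 5), (-5, 5)], seed=3, tol=1e-8)
print(res.x)   # expect x[1] to land at an integer (here 3)
```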
NASA Astrophysics Data System (ADS)
Tang, Peipei; Wang, Chengjing; Dai, Xiaoxia
2016-04-01
In this paper, we propose a majorized Newton-CG augmented Lagrangian-based finite element method for 3D elastic frictionless contact problems. In this scheme, we discretize the restoration problem via the finite element method and reformulate it as a constrained optimization problem. Then we apply the majorized Newton-CG augmented Lagrangian method to solve the optimization problem, which is very suitable for the ill-conditioned case. Numerical results demonstrate that the proposed method is a very efficient algorithm for various large-scale 3D restorations of geological models, especially for the restoration of geological models with complicated faults.
Singh, Jai
2013-01-01
The objective of this study was a thorough reconsideration, within the framework of Newtonian mechanics and work-energy relationships, of the empirically interpreted relationships employed within the CRASH3 damage analysis algorithm in regards to linearity between barrier equivalent velocity (BEV) or peak collision force magnitude and residual damage depth. The CRASH3 damage analysis algorithm was considered, first in terms of the cases of collisions that produced no residual damage, in order to properly explain the damage onset speed and crush resistance terms. Under the modeling constraints of the collision partners representing a closed system and the a priori assumption of linearity between BEV or peak collision force magnitude and residual damage depth, the equations for the sole realistic model were derived. Evaluation of the work-energy relationships for collisions at or below the elastic limit revealed that the BEV or peak collision force magnitude relationships are bifurcated based upon the residual damage depth. Rather than being additive terms from the linear curve fits employed in the CRASH3 damage analysis algorithm, the Campbell b0 and CRASH3 AL terms represent the maximum values that can be ascribed to the BEV or peak collision force magnitude, respectively, for collisions that produce zero residual damage. Collisions resulting in the production of non-zero residual damage depth already account for the surpassing of the elastic limit during closure and therefore the secondary addition of the elastic limit terms represents a double accounting of the same. This evaluation shows that the current energy absorbed formulation utilized in the CRASH3 damage analysis algorithm extraneously includes terms associated with the A and G stiffness coefficients. This sole realistic model, however, is limited, secondary to reducing the coefficient of restitution to a constant value for all cases in which the residual damage depth is nonzero. Linearity between BEV or peak collision force magnitude and residual damage depth may be applicable for particular ranges of residual damage depth for any given region of any given vehicle. Within the modeling construct employed by the CRASH3 damage algorithm, the case of uniform and ubiquitous linearity cannot be supported. Considerations regarding the inclusion of internal work recovered and restitution for modeling the separation phase change in velocity magnitude should account not only for the effects present during the evaluation of a vehicle-to-vehicle collision of interest but also for the approach taken for modeling the force-deflection response for each collision partner.
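For orientation, the linearity assumption under discussion is usually written in Campbell's form, with the CRASH3 absorbed-energy integral built from the A and B stiffness coefficients and the constant G. This is the standard textbook statement of the model being critiqued, not a result of the study, and the coefficients are vehicle- and damage-region-specific.

```latex
\begin{align}
  \mathrm{BEV} &= b_0 + b_1\, C, \\
  E_{\mathrm{abs}} &= \int_0^{w} \left( A\,C + \tfrac{1}{2}\, B\, C^{2} + G \right)\, dl,
  \qquad G = \frac{A^{2}}{2B},
\end{align}
```

where C is the residual crush depth and w is the width of the damaged region.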
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudipta; Deb, Debasis
2016-07-01
Digital image correlation (DIC) is a technique developed for monitoring surface deformation/displacement of an object under loading conditions. This method is further refined to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area that has fractured and opened in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in the deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC processes. The proposed algorithm is successfully implemented on several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the displacement fields represent the damage conditions reasonably well as compared to the regular FEM-DIC technique without considering the damage zones.
Spatially-Resolved Hydraulic Conductivity Estimation Via Poroelastic Magnetic Resonance Elastography
McGarry, Matthew; Weaver, John B.; Paulsen, Keith D.
2015-01-01
Poroelastic magnetic resonance elastography is an imaging technique that could recover mechanical and hydrodynamical material properties of in vivo tissue. To date, mechanical properties have been estimated while hydrodynamical parameters have been assumed homogeneous with literature-based values. Estimating spatially-varying hydraulic conductivity would likely improve model accuracy and provide new image information related to a tissue’s interstitial fluid compartment. A poroelastic model was reformulated to recover hydraulic conductivity with more appropriate fluid-flow boundary conditions. Simulated and physical experiments were conducted to evaluate the accuracy and stability of the inversion algorithm. Simulations were accurate (property errors were < 2%) even in the presence of Gaussian measurement noise up to 3%. The reformulated model significantly decreased variation in the shear modulus estimate (p≪0.001) and eliminated the homogeneity assumption and the need to assign hydraulic conductivity values from literature. Material property contrast was recovered experimentally in three different tofu phantoms and the accuracy was improved through soft-prior regularization. A frequency-dependence in hydraulic conductivity contrast was observed suggesting that fluid-solid interactions may be more prominent at low frequency. In vivo recovery of both structural and hydrodynamical characteristics of tissue could improve detection and diagnosis of neurological disorders such as hydrocephalus and brain tumors. PMID:24771571
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salajegheh, Nima; Abedrabbo, Nader; Pourboghrat, Farhang
An efficient integration algorithm for continuum damage based elastoplastic constitutive equations is implemented in LS-DYNA. The isotropic damage parameter is defined as the ratio of the damaged surface area over the total cross section area of the representative volume element. This parameter is incorporated into the integration algorithm as an internal variable. The developed damage model is then implemented in the FEM code LS-DYNA as a user material subroutine (UMAT). Pure stretch experiments of a hemispherical punch are carried out for copper sheets and the results are compared against the predictions of the implemented damage model. Evaluation of damage parameters is carried out and the optimized values that correctly predicted the failure in the sheet are reported. Prediction of failure in the numerical analysis is performed through element deletion using the critical damage value. The set of failure parameters which accurately predict the failure behavior in copper sheets compared to experimental data is reported as well.
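The flavor of such an update can be conveyed by a one-dimensional sketch: an elastoplastic return mapping with an isotropic damage variable D carried as an additional internal variable, and a critical-damage check playing the role of element deletion. This is a schematic reconstruction with assumed material constants and an assumed damage-evolution law, not the LS-DYNA UMAT itself (which is written in Fortran and is far more general).

```python
# 1-D isotropic-damage elastoplastic stress update with a deletion check.
import numpy as np

E, sig_y, H = 70e3, 200.0, 500.0     # Young's modulus, yield stress, hardening modulus (MPa, assumed)
D_crit, s_dam = 0.25, 30.0           # critical damage and damage modulus (assumed)

def stress_update(eps, state):
    eps_p, alpha, D = state
    sig = (1.0 - D) * E * (eps - eps_p)             # damaged elastic trial stress
    f = abs(sig) / (1.0 - D) - (sig_y + H * alpha)  # yield check in effective-stress space
    if f > 0.0:
        dgamma = f / (E + H)                        # 1-D radial return
        eps_p += dgamma * np.sign(sig)
        alpha += dgamma
        D = min(D + dgamma * (sig_y + H * alpha) / s_dam, 0.99)   # assumed damage evolution
        sig = (1.0 - D) * E * (eps - eps_p)
    return sig, (eps_p, alpha, D), D >= D_crit      # last flag mimics element deletion

state = (0.0, 0.0, 0.0)
for eps in np.linspace(0.0, 0.05, 200):             # monotonic stretching of the material point
    sig, state, deleted = stress_update(eps, state)
    if deleted:
        print(f"'element' deleted at strain {eps:.4f} (damage {state[2]:.2f})")
        break
```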
Applying a Consumer Behavior Lens to Salt Reduction Initiatives
Potvin Kent, Monique; Raats, Monique M.; McConnon, Áine; Wall, Patrick; Dubois, Lise
2017-01-01
Reformulation of food products to reduce salt content has been a central strategy for achieving population level salt reduction. In this paper, we reflect on current reformulation strategies and consider how consumer behavior determines the ultimate success of these strategies. We consider the merits of adopting a ‘health by stealth’, silent approach to reformulation compared to implementing a communications strategy which draws on labeling initiatives in tandem with reformulation efforts. We end this paper by calling for a multi-actor approach which utilizes co-design, participatory tools to facilitate the involvement of all stakeholders, including, and especially, consumers, in making decisions around how best to achieve population-level salt reduction. PMID:28820449
Evaluation of reformulated thermal control coatings in a simulated space environment. Part 1: YB-71
NASA Technical Reports Server (NTRS)
Cerbus, Clifford A.; Carlin, Patrick S.
1994-01-01
The Air Force Space and Missile Systems Center and Wright Laboratory Materials Directorate (WL/ML) have sponsored an effort to reformulate and qualify Illinois Institute of Technology Research Institute (IITRI) spacecraft thermal control coatings. S13G/LO-1, Z93, and YB-71 coatings were reformulated because the potassium silicate binder, Sylvania PS-7, used in the coatings is no longer manufactured. Coatings utilizing the binder's replacement candidate, Kasil 2130, manufactured by the Philadelphia Quartz (PQ) Corporation, Baltimore, Maryland, are undergoing testing at the Materials Directorate's Space Combined Effects Primary Test and Research Equipment (SCEPTRE) Facility operated by the University of Dayton Research Institute (UDRI). The simulated space environment consists of combined ultraviolet (UV) and electron exposure with in situ specimen reflectance measurements. A brief description of the effort at IITRI, results and discussion from testing the reformulated YB-71 coating in SCEPTRE, and plans for further testing of reformulated Z93 and S13G/LO-1 are presented.
Combet, Emilie; Vlassopoulos, Antonis; Mölenberg, Famke; Gressier, Mathilde; Privet, Lisa; Wratten, Craig; Sharif, Sahar; Vieux, Florent; Lehmann, Undine; Masset, Gabriel
2017-04-21
Nutrient profiling ranks foods based on their nutrient composition, with applications in multiple aspects of food policy. We tested the capacity of a category-specific model developed for product reformulation to improve the average nutrient content of foods, using five national food composition datasets (UK, US, China, Brazil, France). Products ( n = 7183) were split into 35 categories based on the Nestlé Nutritional Profiling Systems (NNPS) and were then classified as NNPS 'Pass' if all nutrient targets were met (energy (E), total fat (TF), saturated fat (SFA), sodium (Na), added sugars (AS), protein, calcium). In a modelling scenario, all NNPS Fail products were 'reformulated' to meet NNPS standards. Overall, a third (36%) of all products achieved the NNPS standard/pass (inter-country and inter-category range: 32%-40%; 5%-72%, respectively), with most products requiring reformulation in two or more nutrients. The most common nutrients to require reformulation were SFA (22%-44%) and TF (23%-42%). Modelled compliance with NNPS standards could reduce the average content of SFA, Na and AS (10%, 8% and 6%, respectively) at the food supply level. Despite the good potential to stimulate reformulation across the five countries, the study highlights the need for better data quality and granularity of food composition databases.
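The pass/fail logic is simple to state in code: a product passes its category only if every nutrient-to-limit is at or below the category target (the full NNPS also checks minimum targets for protein and calcium, omitted here for brevity). The thresholds below are placeholders for illustration, not the actual NNPS standards.

```python
# Category-specific pass/fail check in the spirit of the profiling step described above.
targets = {
    "pizza": {"energy_kcal": 250, "total_fat_g": 10, "sat_fat_g": 4,
              "sodium_mg": 450, "added_sugars_g": 3},           # placeholder limits per serving
}

def category_pass(product, category):
    limits = targets[category]
    failing = [k for k, limit in limits.items() if product.get(k, 0) > limit]
    return len(failing) == 0, failing

product = {"energy_kcal": 270, "total_fat_g": 9, "sat_fat_g": 5,
           "sodium_mg": 500, "added_sugars_g": 2}
ok, failing = category_pass(product, "pizza")
print(ok, failing)   # -> False ['energy_kcal', 'sat_fat_g', 'sodium_mg']
```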
Peacock, Amy; Degenhardt, Louisa; Hordern, Antonia; Larance, Briony; Cama, Elena; White, Nancy; Kihas, Ivana; Bruno, Raimondo
2015-12-01
In April 2014, a tamper-resistant controlled-release oxycodone formulation was introduced into the Australian market. This study aimed to identify the level and methods of tampering with reformulated oxycodone, demographic and clinical characteristics of those who reported tampering with reformulated oxycodone, and perceived attractiveness of original and reformulated oxycodone for misuse (via tampering). A prospective cohort of 522 people who regularly tampered with pharmaceutical opioids and had tampered with the original oxycodone product in their lifetime completed two interviews before (January-March 2014: Wave 1) and after (May-August 2014: Wave 2) introduction of reformulated oxycodone. Four-fifths (81%) had tampered with the original oxycodone formulation in the month prior to Wave 1; use and attempted tampering with reformulated oxycodone amongst the sample was comparatively low at Wave 2 (29% and 19%, respectively). Reformulated oxycodone was primarily swallowed (15%), with low levels of recent successful injection (6%), chewing (2%), drinking/dissolving (1%), and smoking (<1%). Participants who tampered with original and reformulated oxycodone were socio-demographically and clinically similar to those who had only tampered with the original formulation, except the former were more likely to report prescribed oxycodone use and stealing pharmaceutical opioid, and less likely to report moderate/severe anxiety. There was significant diversity in the methods for tampering, with attempts predominantly prompted by self-experimentation (rather than informed by word-of-mouth or the internet). Participants rated reformulated oxycodone as more difficult to prepare and inject and less pleasant to use compared to the original formulation. Current findings suggest that the introduction of the tamper-resistant product has been successful at reducing, although not necessarily eliminating, tampering with the controlled-release oxycodone formulation, with lower attractiveness for misuse. Appropriate, effective treatment options must be available with increasing availability of abuse-deterrent products, given the reduction of oxycodone tampering and use amongst a group with high rates of pharmaceutical opioid dependence. Copyright © 2015 Elsevier B.V. All rights reserved.
Otite, Fadar O.; Jacobson, Michael F.; Dahmubed, Aspan
2013-01-01
Introduction: Although some US food manufacturers have reduced trans fatty acids (TFA) in their products, it is unknown how much TFA is being reduced, whether the pace of reformulation has changed over time, or whether reformulations vary by food type or manufacturer. Methods: In 2007, we identified 360 brand-name products in major US supermarkets that contained 0.5 g TFA or more per serving. In 2008, 2010, and 2011, product labels were re-examined to determine TFA content; ingredients lists were also examined in 2011 for partially hydrogenated vegetable oils (PHVO). We assessed changes in TFA content among the 270 products sold in all years between 2007 and 2011 and conducted sensitivity analyses on the 90 products discontinued after 2007. Results: By 2011, 178 (66%) of the 270 products had reduced TFA content. Most reformulated products (146 of 178, 82%) reduced TFA to less than 0.5 g per serving, although half of these 146 still contained PHVO. Among all 270 products, mean TFA content decreased 49% between 2007 and 2011, from 1.9 to 0.9 g per serving. Yet, mean TFA reduction slowed over time, from 30.3% (2007–2008) to 12.1% (2008–2010) to 3.4% (2010–2011) (P value for trend < .001). This slowing pace was due to both fewer reformulations among TFA-containing products at the start of each period and smaller TFA reductions among reformulated products. Reformulations also varied substantially by both food category and manufacturer, with some eliminating or nearly eliminating TFA and others showing no significant changes. Sensitivity analyses were similar to the main findings. Conclusions: Some US products and food manufacturers have made progress in reducing TFA, but substantial variation exists by food type and by parent company, and overall progress has significantly slowed over time. Because TFA consumption is harmful even at low levels, our results emphasize the need for continued efforts toward reformulating or discontinuing foods to eliminate PHVO. PMID:23701722
NASA Astrophysics Data System (ADS)
Shi, Binkai; Qiao, Pizhong
2018-03-01
Vibration-based nondestructive testing is an area of growing interest and is worthy of new and innovative approaches. The displacement mode shape is often chosen to identify damage due to its detailed local characteristics and lower sensitivity to surrounding noise. The requirement for a baseline mode shape in most vibration-based damage identification limits the application of such a strategy. In this study, a new surface fractal dimension called edge perimeter dimension (EPD) is formulated, from which an EPD-based window dimension locus (EPD-WDL) algorithm for irregularity or damage identification of plate-type structures is established. An analytical notch-type damage model of simply-supported plates is proposed to evaluate the notch effect on plate vibration performance, while a sub-domain of notch cases with less effect is selected to investigate the robustness of the proposed damage identification algorithm. Then, fundamental aspects of the EPD-WDL algorithm in terms of notch localization, notch quantification, and noise immunity are assessed. A mathematical solution called isomorphism is implemented to remove false peaks caused by inflexions of mode shapes when applying the EPD-WDL algorithm to higher mode shapes. The effectiveness and practicability of the EPD-WDL algorithm are demonstrated by an experimental procedure on damage identification of an artificially-induced notched aluminum cantilever plate using a measurement system consisting of a piezoelectric lead zirconate titanate (PZT) actuator and a scanning laser Doppler vibrometer (SLDV). As demonstrated in both the analytical and experimental evaluations, the new surface fractal dimension technique developed is capable of effectively identifying damage in plate-type structures.
Face recognition using total margin-based adaptive fuzzy support vector machines.
Liu, Yi-Hung; Chen, Yen-Ting
2007-01-01
This paper presents a new classifier called total margin-based adaptive fuzzy support vector machines (TAF-SVM) that deals with several problems that may occur in support vector machines (SVMs) when applied to face recognition. The proposed TAF-SVM not only solves the overfitting problem resulting from outliers through fuzzification of the penalty, but also corrects the skew of the optimal separating hyperplane due to very imbalanced data sets by using a different-cost algorithm. In addition, by introducing the total margin algorithm to replace the conventional soft margin algorithm, a lower generalization error bound can be obtained. These three functions are embedded into the traditional SVM, and the TAF-SVM is formulated in both linear and nonlinear cases. By using two databases, the Chung Yuan Christian University (CYCU) multiview and the facial recognition technology (FERET) face databases, and using the kernel Fisher's discriminant analysis (KFDA) algorithm to extract discriminating face features, experimental results show that the proposed TAF-SVM is superior to SVM in terms of face-recognition accuracy. The results also indicate that the proposed TAF-SVM can achieve smaller error variances than SVM over a number of tests, such that better recognition stability can be obtained.
NASA Astrophysics Data System (ADS)
Pitarch, Jaime; Ruiz-Verdú, Antonio; Sendra, María. D.; Santoleri, Rosalia
2017-02-01
We studied the performance of the MERIS maximum peak height (MPH) algorithm in the retrieval of chlorophyll-a concentration (CHL), using a matchup data set of Bottom-of-Rayleigh Reflectances (BRR) and CHL from a hypertrophic lake (Albufera de Valencia). The MPH algorithm produced a slight underestimation of CHL in the pixels classified as cyanobacteria (83% of the total) and a strong overestimation in those classified as eukaryotic phytoplankton (17%). In situ biomass data showed that the binary classification of MPH was not appropriate for mixed phytoplankton populations, also producing unrealistic discontinuities in the CHL maps. We recalibrated MPH using our matchup data set and found that a single third-degree calibration curve fitted all matchups equally well regardless of how they were classified. As a modification to the former approach, we incorporated the Phycocyanin Index (PCI) in the formula, thus taking into account the gradient of phytoplankton composition, which reduced the CHL retrieval errors. By using in situ biomass data, we also proved that PCI was indeed an indicator of cyanobacterial dominance. We applied our recalibration of the MPH algorithm to the whole MERIS data set (2002-2012). Results highlight the usefulness of the MPH algorithm as a tool to monitor eutrophication. This is all the more relevant because MPH does not require a complete atmospheric correction, which often fails over such waters. An adequate flagging or correction of sun glint is advisable though, since the MPH algorithm was sensitive to sun glint.
Modeled Dietary Impact of Pizza Reformulations in US Children and Adolescents.
Masset, Gabriel; Mathias, Kevin C; Vlassopoulos, Antonis; Mölenberg, Famke; Lehmann, Undine; Gibney, Mike; Drewnowski, Adam
2016-01-01
Approximately 20% of US children and adolescents consume pizza on any given day; and pizza intake is associated with higher intakes of energy, sodium, and saturated fat. The reformulation of pizza products has yet to be evaluated as a viable option to improve diets of the US youth. This study modeled the effect on nutrient intakes of two potential pizza reformulation strategies based on the standards established by the Nestlé Nutritional Profiling System (NNPS). Dietary intakes were retrieved from the first 24-hr recall of the National Health and Nutrition Examination Survey (NHANES) 2011-12, for 2655 participants aged 4-19 years. The composition of pizzas in the NHANES food database (n = 69) was compared against the NNPS standards for energy, total fat, saturated fat, sodium, added sugars, and protein. In a reformulation scenario, the nutrient content of pizzas was adjusted to the NNPS standards if these were not met. In a substitution scenario, pizzas that did not meet the standards were replaced by the closest pizza, based on nutrient content, that met all of the NNPS standards. Pizzas consistent with all the NNPS standards (29% of all pizzas) were significantly lower in energy, saturated fat and sodium than pizzas that were not. Among pizza consumers, modeled intakes in the reformulation and substitution scenarios were lower in energy (-14 and -45kcal, respectively), saturated fat (-1.2 and -2.7g), and sodium (-143 and -153mg) compared to baseline. Potential industry-wide reformulation of a single food category or intra-category food substitutions may positively impact dietary intakes of US children and adolescents. Further promotion and support of these complementary strategies may facilitate the adoption and implementation of reformulation standards.
Modeled Dietary Impact of Pizza Reformulations in US Children and Adolescents
Masset, Gabriel; Mathias, Kevin C.; Vlassopoulos, Antonis; Mölenberg, Famke; Lehmann, Undine; Gibney, Mike; Drewnowski, Adam
2016-01-01
Background and Objective: Approximately 20% of US children and adolescents consume pizza on any given day; and pizza intake is associated with higher intakes of energy, sodium, and saturated fat. The reformulation of pizza products has yet to be evaluated as a viable option to improve diets of the US youth. This study modeled the effect on nutrient intakes of two potential pizza reformulation strategies based on the standards established by the Nestlé Nutritional Profiling System (NNPS). Methods: Dietary intakes were retrieved from the first 24-hr recall of the National Health and Nutrition Examination Survey (NHANES) 2011–12, for 2655 participants aged 4–19 years. The composition of pizzas in the NHANES food database (n = 69) was compared against the NNPS standards for energy, total fat, saturated fat, sodium, added sugars, and protein. In a reformulation scenario, the nutrient content of pizzas was adjusted to the NNPS standards if these were not met. In a substitution scenario, pizzas that did not meet the standards were replaced by the closest pizza, based on nutrient content, that met all of the NNPS standards. Results: Pizzas consistent with all the NNPS standards (29% of all pizzas) were significantly lower in energy, saturated fat and sodium than pizzas that were not. Among pizza consumers, modeled intakes in the reformulation and substitution scenarios were lower in energy (-14 and -45kcal, respectively), saturated fat (-1.2 and -2.7g), and sodium (-143 and -153mg) compared to baseline. Conclusions: Potential industry-wide reformulation of a single food category or intra-category food substitutions may positively impact dietary intakes of US children and adolescents. Further promotion and support of these complementary strategies may facilitate the adoption and implementation of reformulation standards. PMID:27706221
Reformulations of the Yang-Mills theory toward quark confinement and mass gap
NASA Astrophysics Data System (ADS)
Kondo, Kei-Ichi; Kato, Seikou; Shibata, Akihiro; Shinohara, Toru
2016-01-01
We propose reformulations of the SU(N) Yang-Mills theory toward quark confinement and mass gap. In fact, we have given a new framework for reformulating the SU(N) Yang-Mills theory using new field variables. This includes the preceding works given by Cho, Faddeev and Niemi as a special case, called the maximal option in our reformulations. The advantage of our reformulations is that the original non-Abelian gauge field variables can be changed into the new field variables such that one of them, called the restricted field, gives the dominant contribution to quark confinement in a gauge-independent way. Our reformulations can be combined with the SU(N) extension of the Diakonov-Petrov version of the non-Abelian Stokes theorem for the Wilson loop operator to give a gauge-invariant definition for the magnetic monopole in the SU(N) Yang-Mills theory without the scalar field. In the so-called minimal option, especially, the restricted field is non-Abelian and involves the non-Abelian magnetic monopole with the stability group U(N-1). This suggests the non-Abelian dual superconductivity picture for quark confinement. This should be compared with the maximal option: the restricted field is Abelian and involves only the Abelian magnetic monopoles with the stability group U(1)^(N-1), just like the Abelian projection. We give some applications of this reformulation, e.g., the stability for the homogeneous chromomagnetic condensation of the Savvidy type, the large N treatment for deriving the dimensional transmutation and understanding the mass gap, and also the numerical simulations on a lattice which are given by Dr. Shibata in a subsequent talk.
Dusty gas with one fluid in smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Laibe, Guillaume; Price, Daniel J.
2014-05-01
In a companion paper we have shown how the equations describing gas and dust as two fluids coupled by a drag term can be re-formulated to describe the system as a single-fluid mixture. Here, we present a numerical implementation of the one-fluid dusty gas algorithm using smoothed particle hydrodynamics (SPH). The algorithm preserves the conservation properties of the SPH formalism. In particular, the total gas and dust mass, momentum, angular momentum and energy are all exactly conserved. Shock viscosity and conductivity terms are generalized to handle the two-phase mixture accordingly. The algorithm is benchmarked against a comprehensive suite of problems: DUSTYBOX, DUSTYWAVE, DUSTYSHOCK and DUSTYOSCILL, each of them addressing different properties of the method. We compare the performance of the one-fluid algorithm to the standard two-fluid approach. The one-fluid algorithm is found to solve both of the fundamental limitations of the two-fluid algorithm: it is no longer possible to concentrate dust below the resolution of the gas (they have the same resolution by definition), and the spatial resolution criterion h < c_s t_s, required in two-fluid codes to avoid over-damping of kinetic energy, is unnecessary. Implicit time-stepping is straightforward. As a result, the algorithm is up to ten billion times more efficient for 3D simulations of small grains. Additional benefits include the use of half as many particles, a single kernel and fewer SPH interpolations. The only limitation is that it does not capture multi-streaming of dust in the limit of zero coupling, suggesting that in this case a hybrid approach may be required.
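The stiffness that motivates the one-fluid approach is already visible in the standard DUSTYBOX test mentioned above: two uniform phases coupled by linear drag, for which the velocity difference decays analytically as dv(t) = dv(0) exp(-t/t_s). The snippet below checks an explicit two-fluid update against that solution; it is a physics sanity check with assumed densities and drag coefficient, not an SPH implementation, but it shows why an explicit time step must resolve the (possibly tiny) stopping time t_s.

```python
# DUSTYBOX-style drag decay: explicit two-fluid update vs analytic solution.
import numpy as np

rho_g, rho_d = 1.0, 0.01          # gas and dust densities (assumed)
K = 10.0                          # linear drag coefficient (assumed)
ts = rho_g * rho_d / (K * (rho_g + rho_d))   # stopping time
vg, vd = 0.0, 1.0                 # initial velocities
dt, t_end = 1e-4, 0.02            # explicit step must satisfy dt << ts for small grains
t = 0.0
while t < t_end:
    drag = K * (vd - vg)
    vg += dt * drag / rho_g
    vd -= dt * drag / rho_d
    t += dt
print(vd - vg, 1.0 * np.exp(-t / ts))   # numerical vs analytic velocity difference
```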
Damage identification on spatial Timoshenko arches by means of genetic algorithms
NASA Astrophysics Data System (ADS)
Greco, A.; D'Urso, D.; Cannizzaro, F.; Pluchino, A.
2018-05-01
In this paper, a procedure for the dynamic identification of damage in spatial Timoshenko arches is presented. The proposed approach is based on the calculation of an arbitrary number of exact eigen-properties of a damaged spatial arch by means of the Wittrick and Williams algorithm. The proposed damage model considers a reduction of the volume in a part of the arch, and is therefore suitable, unlike most models in the dedicated literature, not only for concentrated cracks but also for diffuse damaged zones which may involve a loss of mass. Different damage scenarios can be taken into account, with variable location, intensity and extension of the damage as well as number of damaged segments. An optimization procedure, aiming at identifying which damage configuration minimizes the difference between its eigen-properties and a set of measured modal quantities for the structure, is implemented making use of genetic algorithms. In this context, an initial random population of chromosomes, representing different damage distributions along the arch, is forced to evolve towards the fittest solution. Several applications with different, single or multiple, damaged zones and boundary conditions confirm the validity and the applicability of the proposed procedure even in the presence of instrumental errors on the measured data.
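The inverse-problem structure can be illustrated on a much simpler system than a spatial Timoshenko arch: below, a bare-bones genetic algorithm searches for the per-element stiffness-reduction pattern of a fixed-free spring-mass chain whose natural frequencies best match a set of "measured" ones. Everything here (chain model, GA operators, population sizes) is an assumed toy setup, not the authors' Wittrick-Williams-based procedure.

```python
# GA-based damage identification on a toy spring-mass chain.
import numpy as np

rng = np.random.default_rng(4)
n = 8                                            # number of chain elements

def frequencies(damage):
    k = 1000.0 * (1.0 - damage)                  # damaged element stiffnesses
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    return np.sqrt(np.linalg.eigvalsh(K))        # unit masses assumed

true_damage = np.zeros(n); true_damage[3] = 0.3  # 30% stiffness loss in element 4
measured = frequencies(true_damage)              # stand-in for measured modal data

def fitness(d):
    return -np.linalg.norm(frequencies(d) - measured)

pop = rng.uniform(0.0, 0.5, size=(60, n))        # initial random damage patterns
for _ in range(200):
    order = np.argsort([fitness(ind) for ind in pop])[::-1]
    parents = pop[order[:20]]                    # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(n) < 0.5, a, b)                        # uniform crossover
        child = child + rng.normal(0.0, 0.02, n) * (rng.random(n) < 0.2)   # sparse mutation
        children.append(np.clip(child, 0.0, 0.9))
    pop = np.vstack([parents, np.array(children)])
best = pop[np.argmax([fitness(ind) for ind in pop])]
print(np.round(best, 2))   # expect a pattern concentrated at element index 3
```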
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fei; Huang, Yongxi
Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program in an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on the South Carolina settings. The value of multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected value based deterministic model and a two-stage stochastic model.
NASA Astrophysics Data System (ADS)
Tsalamengas, John L.
2018-07-01
We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem in the form of a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies ably demonstrate the efficiency and high accuracy of the algorithms.
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant-parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple objective decision making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
Xie, Fei; Huang, Yongxi
2018-02-04
Here, we develop a multistage, stochastic mixed-integer model to support biofuel supply chain expansion under evolving uncertainties. By utilizing the block-separable recourse property, we reformulate the multistage program in an equivalent two-stage program and solve it using an enhanced nested decomposition method with maximal non-dominated cuts. We conduct extensive numerical experiments and demonstrate the application of the model and algorithm in a case study based on the South Carolina settings. The value of multistage stochastic programming method is also explored by comparing the model solution with the counterparts of an expected value based deterministic model and a two-stage stochastic model.
Assimilating Eulerian and Lagrangian data in traffic-flow models
NASA Astrophysics Data System (ADS)
Xia, Chao; Cochrane, Courtney; DeGuire, Joseph; Fan, Gaoyang; Holmes, Emma; McGuirl, Melissa; Murphy, Patrick; Palmer, Jenna; Carter, Paul; Slivinski, Laura; Sandstede, Björn
2017-05-01
Data assimilation of traffic flow remains a challenging problem. One difficulty is that data come from different sources ranging from stationary sensors and camera data to GPS and cell phone data from moving cars. Sensors and cameras give information about traffic density, while GPS data provide information about the positions and velocities of individual cars. Previous methods for assimilating Lagrangian data collected from individual cars relied on specific properties of the underlying computational model or its reformulation in Lagrangian coordinates. These approaches make it hard to assimilate both Eulerian density and Lagrangian positional data simultaneously. In this paper, we propose an alternative approach that allows us to assimilate both Eulerian and Lagrangian data. We show that the proposed algorithm is accurate and works well in different traffic scenarios and regardless of whether ensemble Kalman or particle filters are used. We also show that the algorithm is capable of estimating parameters and assimilating real traffic observations and synthetic observations obtained from microscopic models.
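The key enabling point is that, once the state vector stacks both Eulerian cell densities and Lagrangian vehicle positions, a single ensemble analysis step can absorb both kinds of observations through one observation operator. The sketch below shows only that (stochastic EnKF) update on synthetic numbers; the traffic-flow forecast model, parameter estimation and particle-filter variant discussed above are not reproduced.

```python
# One stochastic EnKF analysis step on a mixed Eulerian/Lagrangian state vector.
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_cars, n_ens = 10, 3, 50
n_state = n_cells + n_cars

truth = np.concatenate([np.linspace(0.2, 0.8, n_cells), [1.0, 4.0, 7.0]])   # densities + positions
ens = truth[:, None] + 0.2 * rng.normal(size=(n_state, n_ens))              # prior ensemble

# observe densities in cells 2 and 7 (Eulerian) and the position of car 1 (Lagrangian)
obs_idx = [2, 7, n_cells + 1]
H = np.zeros((len(obs_idx), n_state)); H[np.arange(len(obs_idx)), obs_idx] = 1.0
R = np.diag([0.05**2, 0.05**2, 0.1**2])                                      # obs error covariance (assumed)
y = H @ truth + rng.multivariate_normal(np.zeros(len(obs_idx)), R)

X = ens - ens.mean(axis=1, keepdims=True)
P = X @ X.T / (n_ens - 1)                                                    # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)                                 # Kalman gain
perturbed_y = y[:, None] + rng.multivariate_normal(np.zeros(len(obs_idx)), R, n_ens).T
ens_post = ens + K @ (perturbed_y - H @ ens)
print(np.abs(ens.mean(1) - truth).mean(), np.abs(ens_post.mean(1) - truth).mean())
```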
Geometric constrained variational calculus I: Piecewise smooth extremals
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2015-05-01
A geometric setup for constrained variational calculus is presented. The analysis deals with the study of the extremals of an action functional defined on piecewise differentiable curves, subject to differentiable, non-holonomic constraints. Special attention is paid to the tensorial aspects of the theory. As far as the kinematical foundations are concerned, a fully covariant scheme is developed through the introduction of the concept of infinitesimal control. The standard classification of the extremals into normal and abnormal ones is discussed, pointing out the existence of an algebraic algorithm assigning to each admissible curve a corresponding abnormality index, related to the co-rank of a suitable linear map. Attention is then shifted to the study of the first variation of the action functional. The analysis includes a revisitation of Pontryagin's equations and of the Lagrange multipliers method, as well as a reformulation of Pontryagin's algorithm in Hamiltonian terms. The analysis is completed by a general result, concerning the existence of finite deformations with fixed endpoints.
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam O.; Hull, Patrick V.
2014-01-01
This paper presents a design automation process using optimization via a genetic algorithm to design the conceptual structure of a Lunar Pallet Lander. The goal is to determine a design that keeps the primary natural frequencies at or above a target value while minimizing the total mass. Several iterations of the process are presented. First, a concept optimization is performed to determine which class of structure would produce suitable candidate designs. From this, a stiffened sheet-metal approach was selected, leading to optimization of beam placement by generating a two-dimensional mesh and varying the physical locations of reinforcing beams. Finally, the problem is reformulated as a binary one using one-dimensional beam elements, truncating the design space to allow faster convergence and allowing additional mechanical failure criteria to be included in the optimization responses. Results are presented for each design space configuration. The final flight design was derived from these results.
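A minimal binary genetic algorithm of the kind described is sketched below; the stiffness and mass coefficients, target frequency, and fitness form are hypothetical stand-ins for the lander's frequency and mass criteria, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_beams, pop_size, n_gen = 20, 40, 60
stiff = rng.uniform(0.5, 2.0, n_beams)     # hypothetical stiffness gain per candidate beam
mass = rng.uniform(0.1, 0.4, n_beams)      # hypothetical mass cost per candidate beam
f_target = 1.5                             # hypothetical frequency target

def fitness(bits):
    # crude frequency proxy sqrt(k/m); penalize designs below the target frequency
    k = 1.0 + np.dot(stiff, bits)
    m = 1.0 + np.dot(mass, bits)
    freq = np.sqrt(k / m)
    return -m - 100.0 * max(0.0, f_target - freq)

pop = rng.integers(0, 2, size=(pop_size, n_beams))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # tournament selection
    parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                   for _ in range(pop_size)]]
    # single-point crossover on each pair of parents
    children = parents.copy()
    for j, c in enumerate(rng.integers(1, n_beams, size=pop_size // 2)):
        children[2 * j, c:], children[2 * j + 1, c:] = \
            parents[2 * j + 1, c:].copy(), parents[2 * j, c:].copy()
    # bit-flip mutation
    flip = rng.random(children.shape) < 0.02
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best, fitness(best))
```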
Memory sparing, fast scattering formalism for rigorous diffraction modeling
NASA Astrophysics Data System (ADS)
Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.
2017-07-01
The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.
NASA Astrophysics Data System (ADS)
Kassa, Semu Mitiku; Tsegay, Teklay Hailay
2017-08-01
Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of the hierarchy. Such problems are common in management, engineering design and decision-making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm is proposed by applying a fuzzy goal programming approach and by reformulating the fractional constraints as equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrate that the proposed algorithm is very promising and can also be used to solve larger as well as n-level problems of similar structure.
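To illustrate the kind of transformation involved, the standard way of clearing a linear-fractional constraint of its denominator is shown below; this is a simplified, assumed example (the paper treats the more general quadratic fractional case) and it relies on the denominator staying positive over the feasible set.

```latex
% Illustrative only: a linear-fractional constraint cleared of its denominator,
% assuming b^T x + \beta > 0 on the feasible set.
\[
\frac{a^{\mathsf T} x + \alpha}{b^{\mathsf T} x + \beta} \le c
\quad\Longleftrightarrow\quad
a^{\mathsf T} x + \alpha \le c\,(b^{\mathsf T} x + \beta),
\qquad b^{\mathsf T} x + \beta > 0 .
\]
```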
Reformulated gasoline (RFG) is gasoline blended to burn cleaner and reduce smog-forming and toxic pollutants in the air we breathe. The Clean Air Act requires that RFG be used to reduce harmful, ozone-forming emissions.
Downs, Shauna M; Gupta, Vidhu; Ghosh-Jerath, Suparna; Lock, Karen; Thow, Anne Marie; Singh, Archna
2013-12-05
The consumption of partially hydrogenated vegetable oils (PHVOs) high in trans fat is associated with an increased risk of cardiovascular disease and other non-communicable diseases. In response to high intakes of PHVOs, the Indian government has proposed regulation to set limits on the amount of trans fat permissible in PHVOs. Global recommendations are to replace PHVOs with polyunsaturated fatty acids (PUFAs) in order to optimise health benefits; however, little is known about the practicalities of implementation in low-income settings. The aim of this study was to examine the technical and economic feasibility of reducing trans fat in PHVOs and reformulating them using healthier fats. Thirteen semi-structured interviews were conducted with manufacturers and technical experts of PHVOs in India. Data were open-coded and organised according to key themes. Interviewees indicated that reformulating PHVOs was both economically and technically feasible provided that trans fat regulation takes account of the food technology challenges associated with product reformulation. However, there will be challenges in maintaining the physical properties that consumers prefer while reducing the trans fat in PHVOs. The availability of input oils was not seen to be a problem because of the low cost and high availability of imported palm oil, which was the input oil of choice for industry. Most interviewees were not concerned about the potential increase in saturated fat associated with increased use of palm oil and were not planning to use PUFAs in product reformulation. Interviewees indicated that many smaller manufacturers would not have sufficient capacity to reformulate products to reduce trans fat. Reformulating PHVOs to reduce trans fat in India is feasible; however, a collision course exists where the public health goal of replacing PHVOs with PUFAs is opposed to the goals of industry to produce a cheap alternative product that meets consumer preferences. Ensuring that product reformulation is done in a way that maximises health benefits will require shifts in knowledge and subsequent demand of products, decreased reliance on palm oil, investment in research and development and increased capacity for smaller manufacturers.
Discrete particle swarm optimization for identifying community structures in signed social networks.
Cai, Qing; Gong, Maoguo; Shen, Bo; Ma, Lijia; Jiao, Licheng
2014-10-01
Modern network science has greatly facilitated the understanding of complex systems. Community structure is believed to be one of the notable features of complex networks representing real complicated systems. Very often, uncovering community structures in networks can be regarded as an optimization problem; thus, many evolutionary-algorithm-based approaches have been put forward. Particle swarm optimization (PSO) is an artificial intelligence algorithm inspired by social behaviors such as bird flocking and fish schooling. PSO has proven to be an effective optimization technique. However, PSO was originally designed for continuous optimization, which confounds its application to discrete contexts. In this paper, a novel discrete PSO algorithm is suggested for identifying community structures in signed networks. In the suggested method, the particles' status is redesigned in discrete form so as to make PSO suitable for discrete scenarios, and the particles' updating rules are reformulated by making use of the topology of the signed network. Extensive experiments compared with three state-of-the-art approaches on both synthetic and real-world signed networks demonstrate that the proposed method is effective and promising.
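For context, a minimal sketch of the classic binary PSO (the Kennedy-Eberhart style discretization, with a sigmoid mapping from velocity to bit-flip probability) is given below as a baseline; the paper's own variant redefines particle status and update rules using the signed-network topology, which this sketch does not attempt to reproduce. The toy objective is hypothetical.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_pso(objective, n_bits, n_particles=30, n_iter=100, seed=0):
    """Classic binary PSO baseline: continuous velocities, probabilistic bits."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits)).astype(float)
    v = np.zeros((n_particles, n_bits))
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = (rng.random(x.shape) < sigmoid(v)).astype(float)   # discrete position update
        vals = np.array([objective(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

# toy objective: maximize the number of ones (stands in for a modularity-like score)
best, score = binary_pso(lambda bits: bits.sum(), n_bits=16)
print(best, score)
```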
Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît
2016-01-01
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.
Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing
2012-04-01
This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for the low-frequency band and the high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its predecessors.
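For reference, the plain fuzzy c-means core on which local-information variants such as the one described are built is sketched below; the synthetic 1-D "difference image" and parameter choices are assumptions for illustration, and the spatial-context term of the paper's reformulated algorithm is not included.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means; local-information variants add a spatial penalty
    to the squared distances d2 used below."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                  # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# toy 1-D "difference image": two intensity populations (unchanged vs changed)
x = np.concatenate([np.random.default_rng(1).normal(0.1, 0.05, 500),
                    np.random.default_rng(2).normal(0.8, 0.05, 60)])[:, None]
centers, U = fuzzy_cmeans(x)
labels = U.argmax(axis=1)                              # changed / unchanged map
print(centers.ravel(), np.bincount(labels))
```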
Xu, Zixiang; Zheng, Ping; Sun, Jibin; Ma, Yanhe
2013-01-01
Gene knockout has been used as a common strategy to improve microbial strains for producing chemicals. Several algorithms are available to predict the target reactions to be deleted. Most of them apply mixed-integer bi-level linear programming (MIBLP) based on metabolic networks, and use duality theory to transform the bi-level optimization problem of a large-scale MIBLP into a single-level program. However, the validity of the transformation was not proved. The solution of an MIBLP depends on the structure of the inner problem. If the inner problem is continuous, the Karush-Kuhn-Tucker (KKT) method can be used to reformulate the MIBLP as a single-level problem. We adopt the KKT technique in our algorithm ReacKnock to attack the otherwise intractable solution of the MIBLP, demonstrated with the genome-scale metabolic network model of E. coli for producing various chemicals such as succinate, ethanol and threonine. Compared to the previous methods, our algorithm is fast, stable and reliable in finding the optimal solutions for all the chemical products tested, and is able to provide all the alternative deletion strategies that lead to the same industrial objective. PMID:24348984
McAdams, Hiramie T [Carrollton, IL; Crawford, Robert W [Tucson, AZ; Hadder, Gerald R [Oak Ridge, TN; McNutt, Barry D [Arlington, VA
2006-03-28
Reformulated diesel fuels for automotive diesel engines which meet the requirements of ASTM 975-02 and provide significantly reduced emissions of nitrogen oxides (NO.sub.x) and particulate matter (PM) relative to commercially available diesel fuels.
Frozen Gaussian approximation for 3D seismic tomography
NASA Astrophysics Data System (ADS)
Chai, Lihui; Tong, Ping; Yang, Xu
2018-05-01
Three-dimensional (3D) wave-equation-based seismic tomography is computationally challenging at large scales and in the high-frequency regime. In this paper, we apply the frozen Gaussian approximation (FGA) method to compute 3D sensitivity kernels and perform high-frequency seismic tomography. Rather than the standard ray theory used in seismic inversion (e.g. Kirchhoff migration and Gaussian beam migration), FGA is used to compute the 3D high-frequency sensitivity kernels for travel-time or full waveform inversions. Specifically, we reformulate the equations of the forward and adjoint wavefields for convenience in applying FGA, and with this reformulation one can efficiently compute the Green's functions whose convolutions with the source time function produce the wavefields needed for the construction of 3D kernels. Moreover, a fast summation method is proposed based on a local fast Fourier transform, which greatly improves the speed of reconstruction in the last step of the FGA algorithm. We apply FGA to both travel-time adjoint tomography and full waveform inversion (FWI) on synthetic crosswell seismic data with dominant frequencies as high as those of real crosswell data, and confirm again that FWI requires a more sophisticated initial velocity model for convergence than travel-time adjoint tomography. We also numerically test the accuracy of applying FGA to local earthquake tomography. This study paves the way to directly applying wave-equation-based seismic tomography methods to real data around their dominant frequencies.
[Common law study of the legal responsibility of health care staff related to drug reformulation].
Reche-Castex, F J; Alonso Herreros, J M
2005-01-01
To analyze the responsibility of health care staff in drug reformulation (change of dose, pharmaceutical form or route of administration of medicinal products) based on the common law of the High Court and the National Court. Search and analysis of common law and legal studies included in the databases "El Derecho", "Difusión Jurídica" and "Indret". Health care staff has obligations of means--not of outcomes--according to the care standards included in the "Lex Artis", which can go beyond mere legal standards. Failure to apply these care standards, denial of assistance or disrespect of the autonomy of the patient can constitute negligent behavior. We found 4 cases in common law. In the two cases in which care standards were complied with, including reformulation, the health care professionals were acquitted, whereas in the other two cases, in which reformulations were not used even though the "Lex Artis" required them, the professionals were convicted. Reformulation of medicinal products, as set forth in the Lex Artis, is a practice accepted by the High Court and the National Court, and failure to use it when scientific knowledge advises so is a cause for conviction.
FORTRAN Versions of Reformulated HFGMC Codes
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Aboudi, Jacob; Bednarcyk, Brett A.
2006-01-01
Several FORTRAN codes have been written to implement the reformulated version of the high-fidelity generalized method of cells (HFGMC). Various aspects of the HFGMC and its predecessors were described in several prior NASA Tech Briefs articles, the most recent being HFGMC Enhancement of MAC/GMC (LEW-17818-1), NASA Tech Briefs, Vol. 30, No. 3 (March 2006), page 34. The HFGMC is a mathematical model of micromechanics for simulating stress and strain responses of fiber/matrix and other composite materials. The HFGMC overcomes a major limitation of a prior version of the GMC by accounting for coupling of shear and normal stresses and thereby affords greater accuracy, albeit at a large computational cost. In the reformulation of the HFGMC, the issue of computational efficiency was addressed: as a result, codes that implement the reformulated HFGMC complete their calculations about 10 times as fast as do those that implement the HFGMC. The present FORTRAN implementations of the reformulated HFGMC were written to satisfy a need for compatibility with other FORTRAN programs used to analyze structures and composite materials. The FORTRAN implementations also afford capabilities, beyond those of the basic HFGMC, for modeling inelasticity, fiber/matrix debonding, and coupled thermal, mechanical, piezo, and electromagnetic effects.
Reformulated Gasoline Market Affected Refiners Differently, 1995
1996-01-01
This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.
NASA Astrophysics Data System (ADS)
Fujita, Yasunori
2007-09-01
The reformulation of economics in terms of physics has been carried out intensively, revealing many features of the asset market that were missed in classical economic theories. The present paper attempts to shed new light on this field. That is, this paper aims at reformulating the international trade model by making use of the real option theory. Based on such a stochastic dynamic model, we examine how fluctuations of the foreign exchange rate affect the welfare of the exporting country.
Efficient Reformulation of HOTFGM: Heat Conduction with Variable Thermal Conductivity
NASA Technical Reports Server (NTRS)
Zhong, Yi; Pindera, Marek-Jerzy; Arnold, Steven M. (Technical Monitor)
2002-01-01
Functionally graded materials (FGMs) have become one of the major research topics in the mechanics of materials community during the past fifteen years. FGMs are heterogeneous materials, characterized by spatially variable microstructure, and thus spatially variable macroscopic properties, introduced to enhance material or structural performance. The spatially variable material properties make FGMs challenging to analyze. The review of the various techniques employed to analyze the thermodynamical response of FGMs reveals two distinct and fundamentally different computational strategies, called uncoupled macromechanical and coupled micromechanical approaches by some investigators. The uncoupled macromechanical approaches ignore the effect of microstructural gradation by employing specific spatial variations of material properties, which are either assumed or obtained by local homogenization, thereby resulting in erroneous results under certain circumstances. In contrast, the coupled approaches explicitly account for the micro-macrostructural interaction, albeit at a significantly higher computational cost. The higher-order theory for functionally graded materials (HOTFGM) developed by Aboudi et al. is representative of the coupled approach. However, despite its demonstrated utility in applications where micro-macrostructural coupling effects are important, the theory's full potential is yet to be realized because the original formulation of HOTFGM is computationally intensive. This, in turn, limits the size of problems that can be solved due to the large number of equations required to mimic realistic material microstructures. Therefore, a basis for an efficient reformulation of HOTFGM, referred to as the user-friendly formulation, is developed herein, and subsequently employed in the construction of the efficient reformulation using the local/global conductivity matrix approach. In order to extend HOTFGM's range of applicability, spatially variable thermal conductivity capability at the local level is incorporated into the efficient reformulation. Analytical solutions to validate both the user-friendly and efficient reformulations are also developed. Volume discretization sensitivity and validation studies, as well as a practical application of the developed efficient reformulation, are subsequently carried out. The presented results illustrate the accuracy and implementability of both the user-friendly formulation and the efficient reformulation of HOTFGM.
Extension of the Reformulated Gasoline Program to Maine’s Southern Counties Additional Resources
Supporting documents on EPA's decision about extending the Clean Air Act prohibition against the sale of conventional gasoline in reformulated gasoline areas to the southern Maine counties of York, Cumberland, and Sagadahoc.
Frequency Response Function Based Damage Identification for Aerospace Structures
NASA Astrophysics Data System (ADS)
Oliver, Joseph Acton
Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based in concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems which are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1 followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise then successfully proves the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping. Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems-including methods for initial baseline correlation and data reduction-and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.
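The statistically weighted least-squares update at the heart of the algorithm described above can be illustrated with the minimal sketch below; the sensitivity matrix, residual vector and weights are hypothetical numbers, and the real algorithm operates on frequency response function based residual force vectors from a finite element model.

```python
import numpy as np

def weighted_ls_update(S, r, W):
    """One statistically weighted least-squares step: find the parameter change
    dp minimizing (r - S dp)^T W (r - S dp), plus its covariance."""
    A = S.T @ W @ S
    dp = np.linalg.solve(A, S.T @ W @ r)
    cov = np.linalg.inv(A)            # estimation uncertainty of dp
    return dp, cov

# toy example: 3 measured residuals, 2 damage parameters (hypothetical numbers)
S = np.array([[1.0, 0.2], [0.4, 1.1], [0.3, 0.3]])   # sensitivity matrix
r = np.array([0.9, 1.6, 0.5])                        # FRF-based residual vector
W = np.diag([1.0, 2.0, 0.5])                         # statistical weights
dp, cov = weighted_ls_update(S, r, W)
print(dp, np.sqrt(np.diag(cov)))
```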
Luo, Yuan; Szolovits, Peter
2016-01-01
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms, proposed query reformulations, and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic time stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions.
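To make the query types concrete, the sketch below classifies the Allen relation between two stand-off spans and runs a brute-force stabbing query; both are O(n) illustrations of the semantics only, whereas the paper's contribution is answering such queries in logarithmic time with augmented interval structures. The example spans are hypothetical.

```python
def allen_relation(a, b):
    """Classify the Allen relation between half-open [start, end) spans a and b."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a == b:   return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

def stabbing_query(annotations, pos):
    """Return annotations whose span covers position pos (brute force, O(n))."""
    return [ann for ann in annotations if ann[0] <= pos < ann[1]]

anns = [(0, 12), (5, 9), (9, 20)]        # hypothetical stand-off spans
print(allen_relation(anns[0], anns[1]))  # 'contains'
print(stabbing_query(anns, 9))           # [(0, 12), (9, 20)]
```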
The segmentation of Thangka damaged regions based on the local distinction
NASA Astrophysics Data System (ADS)
Xuehui, Bi; Huaming, Liu; Xiuyou, Wang; Weilan, Wang; Yashuai, Yang
2017-01-01
Damaged regions must be segmented before digitally repairing Thangka cultural relics. A new segmentation algorithm based on local distinction is proposed for segmenting damaged regions. It takes into account that some damaged areas have a transition-zone feature, as well as the difference between damaged regions and their surrounding regions, by combining the local gray value, local complexity and local definition-complexity (LDC). First, the local complexity is calculated and normalized; second, the local definition-complexity is calculated and normalized; third, the local distinction is calculated; finally, a threshold is set to segment the local distinction image, over-segmentation is removed, and the final segmentation result is obtained. The experimental results show that our algorithm is effective and that it can segment damaged frescoes, natural images, etc.
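A rough sketch of this kind of local-measure-plus-threshold pipeline is given below; local variance and gradient energy are used here as assumed stand-ins for the paper's local complexity and definition-complexity measures, they are combined multiplicatively purely for illustration, and the synthetic image and threshold are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_distinction_map(img, win=7):
    """Illustrative local-measure map: local variance ('complexity' proxy) times
    local gradient energy ('definition' proxy), each normalized to [0, 1]."""
    mean = uniform_filter(img, win)
    var = uniform_filter(img ** 2, win) - mean ** 2           # local complexity proxy
    gy, gx = np.gradient(img)
    grad = uniform_filter(gx ** 2 + gy ** 2, win)             # local definition proxy
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-12)
    return norm(var) * norm(grad)

rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.02, (64, 64))
img[20:40, 20:40] += rng.normal(0.0, 0.3, (20, 20))           # synthetic "damaged" patch
mask = local_distinction_map(img) > 0.2                       # threshold segmentation
print(mask.sum(), "pixels flagged as damaged")
```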
Chan, Eugene; Rose, L R Francis; Wang, Chun H
2015-05-01
Existing damage imaging algorithms for detecting and quantifying structural defects, particularly those based on diffraction tomography, assume far-field conditions for the scattered field data. This paper presents a major extension of diffraction tomography that can overcome this limitation and utilises a near-field multi-static data matrix as the input data. This new algorithm, which employs numerical solutions of the dynamic Green's functions, makes it possible to quantitatively image laminar damage even in complex structures for which the dynamic Green's functions are not available analytically. To validate this new method, the numerical Green's functions and the multi-static data matrix for laminar damage in flat and stiffened isotropic plates are first determined using finite element models. Next, these results are time-gated to remove boundary reflections, followed by discrete Fourier transform to obtain the amplitude and phase information for both the baseline (damage-free) and the scattered wave fields. Using these computationally generated results and experimental verification, it is shown that the new imaging algorithm is capable of accurately determining the damage geometry, size and severity for a variety of damage sizes and shapes, including multi-site damage. Some aspects of minimal sensor requirements pertinent to image quality and practical implementation are also briefly discussed.
A Simplified Algorithm for Statistical Investigation of Damage Spreading
NASA Astrophysics Data System (ADS)
Gecow, Andrzej
2009-04-01
On the way to simulating the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness must be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which makes it difficult to define a fitness. The main statistical effects of the adaptive condition result from the tendency toward small change; to appear, they only require a statistically correct size of the damage initiated by an evolutionary change of the system. This observation allows the feedback loops to be cut and, in effect, a particular statistically correct state to be obtained instead of the long circular attractor which, in the quenched model, is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e. the statistical connection between the structural parameters of the initial change and the size of the resulting damage. It is a reversed-annealed method: functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are correctly depicted, in comparison to the Derrida annealed approximation, which expects equilibrium levels for large networks. The algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.
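A toy reconstruction of the "evaluate each node only once" idea is sketched below in Python (the author's program is in Pascal and is not reproduced here); the random Boolean network, frozen random signals, and propagation rule are assumptions chosen only to illustrate counting damaged nodes after a single perturbation.

```python
import random

random.seed(3)
N, K = 200, 2                                   # nodes, inputs per node
inputs = {v: random.sample(range(N), K) for v in range(N)}
tables = {v: [random.randint(0, 1) for _ in range(2 ** K)] for v in range(N)}
out_edges = {v: [] for v in range(N)}
for v, ins in inputs.items():
    for u in ins:
        out_edges[u].append(v)

state = {v: random.randint(0, 1) for v in range(N)}   # frozen random signals

def node_output(v, damaged):
    # inputs that belong to the damaged set are read with their signal flipped
    idx = sum((state[u] ^ (u in damaged)) << i for i, u in enumerate(inputs[v]))
    return tables[v][idx]

def damage_size(start):
    """Propagate a single perturbation, evaluating each node at most once."""
    damaged, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for v in out_edges[u]:
            if v not in damaged and node_output(v, damaged) != node_output(v, set()):
                damaged.add(v)
                frontier.append(v)
    return len(damaged)

print(sum(damage_size(v) for v in range(N)) / N)   # mean damage size over initiations
```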
Analytical and numerical analysis of frictional damage in quasi brittle materials
NASA Astrophysics Data System (ADS)
Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.
2016-07-01
Frictional sliding and crack growth are two main dissipation processes in quasi brittle materials. The frictional sliding along closed cracks is the origin of macroscopic plastic deformation while the crack growth induces a material damage. The main difficulty of modeling is to consider the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed. But there are so far no analytical solutions even for simple loading paths for the validation of such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupling nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient to guarantee a systematic numerical convergence.
Stylistic Reformulation: Theoretical Premises and Practical Applications.
ERIC Educational Resources Information Center
Schultz, Jean Marie
1994-01-01
Various aspects of writing style are discussed to propose concrete methods for improving students' performance. Topics covered include the relationship between syntactic and cognitive complexity and classroom techniques and the reformulation technique as applied to student writing samples. (Contains 20 references.) (LB)
Motor fuels : issues related to reformulated gasoline, oxygenated fuels, and biofuels
DOT National Transportation Integrated Search
1996-06-01
This report by the General Accounting Office summarizes (1) the results of federal and other studies on the cost-effectiveness of using reformulated gasoline compared to other measures to control automotive emissions and compare the price estim...
NASA Technical Reports Server (NTRS)
Wilt, Thomas E.; Arnold, Steven M.; Saleeb, Atef F.
1997-01-01
A fatigue damage computational algorithm utilizing a multiaxial, isothermal, continuum-based fatigue damage model for unidirectional metal-matrix composites has been implemented into the commercial finite element code MARC using MARC user subroutines. Damage is introduced into the finite element solution through the concept of effective stress that fully couples the fatigue damage calculations with the finite element deformation solution. Two applications using the fatigue damage algorithm are presented. First, an axisymmetric stress analysis of a circumferentially reinforced ring, wherein both the matrix cladding and the composite core were assumed to behave elastic-perfectly plastic. Second, a micromechanics analysis of a fiber/matrix unit cell using both the finite element method and the generalized method of cells (GMC). Results are presented in the form of S-N curves and damage distribution plots.
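For illustration of the effective-stress concept mentioned above, a minimal scalar damage-accumulation loop is sketched below; the power-law rate, its constants, and the stress amplitude are hypothetical, and the paper's model is multiaxial and fully coupled to the finite element deformation solution.

```python
def fatigue_step(D, stress_amplitude, A=2.0e3, k=4.0):
    """Advance scalar damage by one load cycle using a simple power-law rate
    in the effective stress (illustrative only, not the paper's model)."""
    sigma_eff = stress_amplitude / (1.0 - D)       # effective-stress concept
    dD = (sigma_eff / A) ** k                      # per-cycle damage increment
    return min(1.0, D + dD)

D, n = 0.0, 0
while D < 1.0 and n < 10**7:
    D = fatigue_step(D, stress_amplitude=400.0)
    n += 1
print("cycles to failure:", n)
```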
A coupled/uncoupled deformation and fatigue damage algorithm utilizing the finite element method
NASA Technical Reports Server (NTRS)
Wilt, Thomas E.; Arnold, Steven M.
1994-01-01
A fatigue damage computational algorithm utilizing a multiaxial, isothermal, continuum based fatigue damage model for unidirectional metal matrix composites has been implemented into the commercial finite element code MARC using MARC user subroutines. Damage is introduced into the finite element solution through the concept of effective stress which fully couples the fatigue damage calculations with the finite element deformation solution. An axisymmetric stress analysis was performed on a circumferentially reinforced ring, wherein both the matrix cladding and the composite core were assumed to behave elastic-perfectly plastic. The composite core behavior was represented using Hill's anisotropic continuum based plasticity model, and similarly, the matrix cladding was represented by an isotropic plasticity model. Results are presented in the form of S-N curves and damage distribution plots.
Reformulations of the Yang-Mills theory toward quark confinement and mass gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondo, Kei-Ichi; Shinohara, Toru; Kato, Seikou
2016-01-22
We propose the reformulations of the SU(N) Yang-Mills theory toward quark confinement and mass gap. In fact, we have given a new framework for reformulating the SU(N) Yang-Mills theory using new field variables. This includes the preceding works given by Cho, Faddeev and Niemi, as a special case called the maximal option in our reformulations. The advantage of our reformulations is that the original non-Abelian gauge field variables can be changed into the new field variables such that one of them called the restricted field gives the dominant contribution to quark confinement in the gauge-independent way. Our reformulations can be combined with the SU(N) extension of the Diakonov-Petrov version of the non-Abelian Stokes theorem for the Wilson loop operator to give a gauge-invariant definition for the magnetic monopole in the SU(N) Yang-Mills theory without the scalar field. In the so-called minimal option, especially, the restricted field is non-Abelian and involves the non-Abelian magnetic monopole with the stability group U(N-1). This suggests the non-Abelian dual superconductivity picture for quark confinement. This should be compared with the maximal option: the restricted field is Abelian and involves only the Abelian magnetic monopoles with the stability group U(1)^(N-1), just like the Abelian projection. We give some applications of this reformulation, e.g., the stability for the homogeneous chromomagnetic condensation of the Savvidy type, the large N treatment for deriving the dimensional transmutation and understanding the mass gap, and also the numerical simulations on a lattice which are given by Dr. Shibata in a subsequent talk.
Plasma spectrum peak extraction algorithm of laser film damage
NASA Astrophysics Data System (ADS)
Zhao, Dan; Su, Jun-hong; Xu, Jun-qi
2012-10-01
Plasma spectrometry is an emerging method for identifying thin-film laser damage. When the laser irradiates the film surface, a flash occurs; the spectrometer receives the flash spectrum, the spectral peaks are extracted, and the known difference between the spectra of the thin-film material and of the atmosphere is used as the standard for deciding whether the film is damaged. Plasma spectrometry can eliminate false judgements caused by atmospheric flashes and discriminates with high accuracy. The spectral peak extraction algorithm is the key technology of plasma spectrometry. In this paper, data denoising and a smoothing filter are introduced first; then, for peak detection, grouping the data into packets is proposed, which increases the stability and accuracy of spectral peak recognition. Such an algorithm enables plasma spectrometry to detect thin-film laser damage from simultaneous measurements and greatly improves work efficiency.
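A minimal smoothing-plus-peak-picking sketch of this kind of pipeline is shown below using SciPy; the synthetic spectrum, filter settings and prominence threshold are assumptions, and the paper's packet-based grouping step is not reproduced.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def extract_peaks(wavelength, intensity, window=11, poly=3, prominence=0.1):
    """Denoise/smooth a flash spectrum and pick its peaks (illustrative stand-in)."""
    smooth = savgol_filter(intensity, window, poly)           # smoothing filter
    idx, props = find_peaks(smooth, prominence=prominence)    # peak detection
    return wavelength[idx], smooth[idx]

# synthetic spectrum: two emission lines on a noisy background
wl = np.linspace(300.0, 800.0, 1000)
spec = (np.exp(-0.5 * ((wl - 520.0) / 3.0) ** 2)
        + 0.6 * np.exp(-0.5 * ((wl - 640.0) / 3.0) ** 2)
        + np.random.default_rng(0).normal(0.0, 0.02, wl.size))
peaks_wl, peaks_val = extract_peaks(wl, spec)
print(peaks_wl)   # expected near 520 nm and 640 nm
```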
Acoustic emission localization based on FBG sensing network and SVR algorithm
NASA Astrophysics Data System (ADS)
Sai, Yaozhang; Zhao, Xiuxia; Hou, Dianli; Jiang, Mingshun
2017-03-01
In practical applications, carbon fiber reinforced plastic (CFRP) structures are prone to various kinds of invisible damage, so damage should be located and detected in a timely manner to ensure the safety of CFRP structures. In this paper, an acoustic emission (AE) localization system based on a fiber Bragg grating (FBG) sensing network and support vector regression (SVR) is proposed for damage localization. AE signals caused by damage are acquired by high-speed FBG interrogation. Using the Shannon wavelet transform, time differences between the AE signals are extracted for the SVR-based localization algorithm. With the SVR model, the coordinates of the AE source can be accurately predicted without knowledge of the wave velocity. The FBG system and localization algorithm are verified on a 500 mm×500 mm×2 mm CFRP plate. The experimental results show that the average error of the localization system is 2.8 mm and the training time is 0.07 s.
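The regression step can be illustrated with the scikit-learn sketch below; the sensor layout, wave speed and training sources are hypothetical and are used only to synthesize time-difference data, whereas in the paper the training pairs come from measured AE events.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])   # hypothetical layout (m)
sources = rng.uniform(0.05, 0.45, size=(200, 2))                        # training source positions
c = 5000.0                       # nominal wave speed, used only to synthesize training data
arrival = np.linalg.norm(sources[:, None, :] - sensors[None, :, :], axis=2) / c
dt = arrival[:, 1:] - arrival[:, :1]            # time differences relative to sensor 0

model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0, epsilon=1e-4))
model.fit(dt, sources)                          # the fitted model itself never uses c

test_src = np.array([[0.30, 0.20]])
test_dt = np.linalg.norm(test_src[:, None, :] - sensors[None, :, :], axis=2) / c
test_dt = test_dt[:, 1:] - test_dt[:, :1]
print(model.predict(test_dt))                   # should lie near (0.30, 0.20)
```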
The Reformulated Model of Learned Helplessness: An Empirical Test.
ERIC Educational Resources Information Center
Rothblum, Esther D.; Green, Leon
Abramson, Seligman and Teasdale's reformulated model of learned helplessness hypothesized that an attribution of causality intervenes between the perception of noncontingency and the expectation of future noncontingency. To test this model, relationships between attribution and performance under failure, success, and control conditions were…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-22
... Submitted to OMB for Review and Approval; Comment Request; Reformulated Gasoline Commingling Provisions... Protection Agency has submitted an information collection request (ICR), Reformulated Gasoline Commingling...: EPA would like to continue collecting notifications from gasoline retailers and wholesale purchaser...
This page summaries the final rule determining that the Atlanta metro area is no longer a federal reformulated gasoline (RFG) covered area and there is no requirement to use federal RFG in the Atlanta area.
Ontological Approach to Military Knowledge Modeling and Management
2004-03-01
A federated search mechanism has to reformulate user queries (expressed using the ontology) in the query languages of the different sources (e.g. SQL...). Ontologies serve as a common terminology, and a unified query is used to perform federated search; query processing relies on ontology mappings to the sources to reformulate queries.
Shan, Liran C; De Brún, Aoife; Henchion, Maeve; Li, Chenguang; Murrin, Celine; Wall, Patrick G; Monahan, Frank J
2017-09-01
Recent innovations in processed meats focus on healthier reformulations through reducing negative constituents and/or adding health-beneficial ingredients. This study explored the influence of base meat product (ham, sausages, beef burger), salt and/or fat content (reduced or not), healthy ingredients (omega 3, vitamin E, none), and price (average or higher than average) on consumers' purchase intention and quality judgement of processed meats. A survey (n=481) using conjoint methodology and cluster analysis was conducted. Price and base meat product were most important for consumers' purchase intention, followed by healthy ingredient and salt and/or fat content. In reformulation, consumers had a preference for ham and sausages over beef burgers, and for reduced salt and/or fat over no reduction. In relation to healthy ingredients, omega 3 was preferred over none, and vitamin E was least preferred. Healthier reformulations improved the perceived healthiness of processed meats. Cluster analyses identified three consumer segments with different product preferences.
Reformulated gasolines: The experience of Mexico City Metropolitan Zone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alvarez, H.B.; Jardon, R.T.; Echeverria, R.S.
1997-12-31
The introduction of several reformulated gasolines into the Mexico City Metropolitan Zone (MCMZ) in mid-1986 is an example of using fuel composition to improve, in theory, the air quality. However, although these changes have resulted in an important reduction of airborne lead concentrations, a worsened situation has been created. Ozone levels in the MCMZ atmosphere have risen sharply since the introduction of the first reformulated gasoline, reaching in the 1990s an annual average of 1,700 exceedances of the Mexican Ozone Air Quality Standard (0.11 ppm not to be exceeded 1 hr a day, one day a year). The authors examine the trend in ozone air quality in the MCMZ in relation to the changes in gasoline composition since 1986. The authors also discuss the importance of performing an air quality impact analysis before the introduction of reformulated gasolines in countries where the local economy does not allow replacing the old car fleet not fitted with exhaust treatment devices.
Process for conversion of lignin to reformulated, partially oxygenated gasoline
Shabtai, Joseph S.; Zmierczak, Wlodzimierz W.; Chornet, Esteban
2001-01-09
A high-yield process for converting lignin into reformulated, partially oxygenated gasoline compositions of high quality is provided. The process is a two-stage catalytic reaction process that produces a reformulated, partially oxygenated gasoline product with a controlled amount of aromatics. In the first stage of the process, a lignin feed material is subjected to a base-catalyzed depolymerization reaction, followed by a selective hydrocracking reaction which utilizes a superacid catalyst to produce a high oxygen-content depolymerized lignin product mainly composed of alkylated phenols, alkylated alkoxyphenols, and alkylbenzenes. In the second stage of the process, the depolymerized lignin product is subjected to an exhaustive etherification reaction, optionally followed by a partial ring hydrogenation reaction, to produce a reformulated, partially oxygenated/etherified gasoline product, which includes a mixture of substituted phenyl/methyl ethers, cycloalkyl methyl ethers, C7-C10 alkylbenzenes, C6-C10 branched and multibranched paraffins, and alkylated and polyalkylated cycloalkanes.
NASA Astrophysics Data System (ADS)
Chen, Dechao; Zhang, Yunong
2017-10-01
Dual-arm redundant robot systems are usually required to handle primary tasks, repetitively and synchronously in practical applications. In this paper, a jerk-level synchronous repetitive motion scheme is proposed to remedy the joint-angle drift phenomenon and achieve the synchronous control of a dual-arm redundant robot system. The proposed scheme is novelly resolved at jerk level, which makes the joint variables, i.e. joint angles, joint velocities and joint accelerations, smooth and bounded. In addition, two types of dynamics algorithms, i.e. gradient-type (G-type) and zeroing-type (Z-type) dynamics algorithms, for the design of repetitive motion variable vectors, are presented in detail with the corresponding circuit schematics. Subsequently, the proposed scheme is reformulated as two dynamical quadratic programs (DQPs) and further integrated into a unified DQP (UDQP) for the synchronous control of a dual-arm robot system. The optimal solution of the UDQP is found by the piecewise-linear projection equation neural network. Moreover, simulations and comparisons based on a six-degrees-of-freedom planar dual-arm redundant robot system substantiate the operation effectiveness and tracking accuracy of the robot system with the proposed scheme for repetitive motion and synchronous control.
Manifold learning-based subspace distance for machinery damage assessment
NASA Astrophysics Data System (ADS)
Sun, Chuang; Zhang, Zhousuo; He, Zhengjia; Shen, Zhongjie; Chen, Binqiang
2016-03-01
Damage assessment is essential for maintaining the safety and reliability of machinery components, and vibration analysis is an effective way to carry it out. In this paper, a damage index is designed by performing manifold distance analysis on the vibration signal. To calculate the index, the vibration signal is collected first, and feature extraction is carried out to obtain statistical features that capture the signal characteristics comprehensively. Then, a manifold learning algorithm is utilized to decompose the feature matrix into a subspace, that is, a manifold subspace. The manifold learning algorithm seeks to preserve the local relationships of the feature matrix, which is more meaningful for damage assessment. Finally, the Grassmann distance between manifold subspaces is defined as the damage index. The Grassmann distance, which reflects the manifold structure, is a suitable metric for measuring the distance between subspaces on the manifold. The defined damage index is applied to damage assessment of a rotor and a bearing, and the results validate its effectiveness for damage assessment of machinery components.
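The Grassmann distance itself can be computed from the principal angles between two subspaces, as in the sketch below; the random basis matrices are assumed stand-ins for the manifold-learning subspaces extracted from baseline and damaged feature matrices.

```python
import numpy as np
from scipy.linalg import subspace_angles

def grassmann_distance(A, B):
    """Geodesic (Grassmann) distance between the column spaces of A and B,
    computed from their principal angles."""
    theta = subspace_angles(A, B)
    return np.sqrt(np.sum(theta ** 2))

rng = np.random.default_rng(0)
baseline = rng.normal(size=(50, 3))                      # healthy-state subspace basis (assumed)
damaged = baseline + 0.3 * rng.normal(size=(50, 3))      # perturbed (damaged) subspace basis
print(grassmann_distance(baseline, baseline))            # ~0 for identical subspaces
print(grassmann_distance(baseline, damaged))             # grows with the perturbation
```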
Pearson-Stuttard, Jonathan; Kypridemos, Chris; Collins, Brendan; Mozaffarian, Dariush; Huang, Yue; Bandosz, Piotr; Capewell, Simon; Whitsel, Laurie; Wilde, Parke; O'Flaherty, Martin; Micha, Renata
2018-04-01
Sodium consumption is a modifiable risk factor for higher blood pressure (BP) and cardiovascular disease (CVD). The US Food and Drug Administration (FDA) has proposed voluntary sodium reduction goals targeting processed and commercially prepared foods. We aimed to quantify the potential health and economic impact of this policy. We used a microsimulation approach of a close-to-reality synthetic population (US IMPACT Food Policy Model) to estimate CVD deaths and cases prevented or postponed, quality-adjusted life years (QALYs), and cost-effectiveness from 2017 to 2036 of 3 scenarios: (1) optimal, 100% compliance with 10-year reformulation targets; (2) modest, 50% compliance with 10-year reformulation targets; and (3) pessimistic, 100% compliance with 2-year reformulation targets, but with no further progress. We used the National Health and Nutrition Examination Survey and high-quality meta-analyses to inform model inputs. Costs included government costs to administer and monitor the policy, industry reformulation costs, and CVD-related healthcare, productivity, and informal care costs. Between 2017 and 2036, the optimal reformulation scenario achieving the FDA sodium reduction targets could prevent approximately 450,000 CVD cases (95% uncertainty interval: 240,000 to 740,000), gain approximately 2.1 million discounted QALYs (1.7 million to 2.4 million), and produce discounted cost savings (health savings minus policy costs) of approximately $41 billion ($14 billion to $81 billion). In the modest and pessimistic scenarios, health gains would be 1.1 million and 0.7 million QALYs, with savings of $19 billion and $12 billion, respectively. All the scenarios were estimated with more than 80% probability to be cost-effective (incremental cost/QALY < $100,000) by 2021 and to become cost-saving by 2031. Limitations include evaluating only diseases mediated through BP, while decreasing sodium consumption could have beneficial effects upon other health burdens such as gastric cancer. Further, the effect estimates in the model are based on interventional and prospective observational studies. They are therefore subject to biases and confounding that may also have influenced our model estimates. Implementing and achieving the FDA sodium reformulation targets could generate substantial health gains and net cost savings.
Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications
NASA Astrophysics Data System (ADS)
He, K.; Zhu, W. D.
2011-07-01
A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistic function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
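A toy version of the inverse problem is sketched below: a two-degree-of-freedom model, a logistic map keeping damage extents in (0, 1), and a SciPy trust-region least-squares solver standing in for the Levenberg-Marquardt scheme. The model, stiffness values and synthetic "measurements" are assumptions chosen only to illustrate fitting stiffness reductions to frequency changes.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

m = np.diag([1.0, 1.0])                 # toy 2-DOF chain: masses
k1, k2 = 1000.0, 800.0                  # undamaged element stiffnesses

def natural_freqs(alpha):
    """Natural frequencies (rad/s) with element stiffnesses scaled by (1 - alpha)."""
    s1, s2 = k1 * (1.0 - alpha[0]), k2 * (1.0 - alpha[1])
    K = np.array([[s1 + s2, -s2], [-s2, s2]])
    return np.sqrt(eigh(K, m, eigvals_only=True))

measured = natural_freqs(np.array([0.20, 0.05]))    # synthetic "damaged" measurements

def residual(z):
    alpha = 1.0 / (1.0 + np.exp(-z))                 # logistic map keeps 0 < alpha < 1
    return natural_freqs(alpha) - measured

# start near the undamaged state; "trf" is a trust-region solver for least squares
sol = least_squares(residual, x0=np.full(2, -3.0), method="trf")
print(1.0 / (1.0 + np.exp(-sol.x)))                  # recovered extents, near [0.20, 0.05]
```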
Reformulations of Yang–Mills theories with space–time tensor fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Zhi-Qiang, E-mail: gzhqedu@gmail.com
2016-01-15
We provide the reformulations of Yang–Mills theories in terms of gauge invariant metric-like variables in three and four dimensions. The reformulations are used to analyze the dimension two gluon condensate and give gauge invariant descriptions of gluon polarization. In three dimensions, we obtain a non-zero dimension two gluon condensate by a one-loop computation, whose value is similar to the square of the photon mass in the Schwinger model. In four dimensions, we obtain a Lagrangian with the dual property, which shares a similar but different property with the dual superconductor scenario. We also discuss the effectiveness of the one-loop approximation.
Reformulation of the relativistic conversion between coordinate time and atomic time
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1975-01-01
The relativistic conversion between coordinate time and atomic time is reformulated to allow simpler time calculations relating analysis in solar system barycentric coordinates (using coordinate time) with earth-fixed observations (measuring 'earth-bound' proper time or atomic time). After an interpretation in terms of relatively well-known concepts, this simplified formulation, which has a rate accuracy of about 10 to the minus 15th, is used to explain the conventions required in the synchronization of a worldwide clock network and to analyze two synchronization techniques - portable clocks and radio interferometry. Finally, pertinent experimental tests of relativity are briefly discussed in terms of the reformulated time conversion.
Acquisition of Expert/Non-Expert Vocabulary from Reformulations.
Antoine, Edwige; Grabar, Natalia
2017-01-01
Technical medical terms are difficult for non-experts to understand correctly. A vocabulary that associates technical terms with layman expressions can help increase the readability of technical texts and their understanding. The purpose of our work is to build this kind of vocabulary. We propose to exploit the notion of reformulation through two methods: extraction of abbreviations and extraction of reformulations with specific markers. The segments associated by these methods are aligned with medical terminologies. Our results cover over 9,000 medical terms and show extraction precision between 0.24 and 0.98. The results are analyzed and compared with existing work.
Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.
2003-01-01
A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight that correctly classify the transmission operation as normal.
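As a rough illustration of how such a threshold can be set, the sketch below computes the FM4 condition indicator (the normalized kurtosis of the difference signal) for a set of healthy-gearbox records and places the alarm threshold at a percentile chosen to bound the false-alarm rate. The signals, record counts, and percentile are assumptions for the example; the paper's thresholds were derived from damage-progression data in the test rig.

    import numpy as np

    def fm4(difference_signal):
        """FM4 condition indicator: normalized kurtosis of the difference signal
        (time-synchronous average with the regular mesh components removed)."""
        d = difference_signal - np.mean(difference_signal)
        n = d.size
        return n * np.sum(d**4) / np.sum(d**2)**2

    # Hypothetical healthy-gearbox records used to set a threshold that bounds the false-alarm rate.
    rng = np.random.default_rng(0)
    healthy_fm4 = np.array([fm4(rng.normal(size=4096)) for _ in range(200)])

    # e.g. keep the in-test false-alarm rate near 1% by taking the 99th percentile of healthy values.
    threshold = np.percentile(healthy_fm4, 99.0)

    # A new record with a heavy-tailed (impulsive) component should exceed the threshold.
    new_record_fm4 = fm4(rng.normal(size=4096) + 0.5 * rng.standard_t(df=3, size=4096))
    print("alarm" if new_record_fm4 > threshold else "normal")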
Failure Analysis for Composition of Web Services Represented as Labeled Transition Systems
NASA Astrophysics Data System (ADS)
Nadkarni, Dinanath; Basu, Samik; Honavar, Vasant; Lutz, Robyn
The Web service composition problem involves the creation of a choreographer that provides the interaction between a set of component services to realize a goal service. Several methods have been proposed and developed to address this problem. In this paper, we consider those scenarios where the composition process may fail due to incomplete specification of goal service requirements or due to the fact that the user is unaware of the functionality provided by the existing component services. In such cases, it is desirable to have a composition algorithm that can provide feedback to the user regarding the cause of failure in the composition process. Such feedback will help guide the user to re-formulate the goal service and iterate the composition process. We propose a failure analysis technique for composition algorithms that views Web service behavior as multiple sequences of input/output events. Our technique identifies the possible cause of composition failure and suggests possible recovery options to the user. We discuss our technique using a simple e-Library Web service in the context of the MoSCoE Web service composition framework.
Soft gluon evolution and non-global logarithms
NASA Astrophysics Data System (ADS)
Martínez, René Ángeles; De Angelis, Matthew; Forshaw, Jeffrey R.; Plätzer, Simon; Seymour, Michael H.
2018-05-01
We consider soft-gluon evolution at the amplitude level. Our evolution algorithm applies to generic hard-scattering processes involving any number of coloured partons and we present a reformulation of the algorithm in such a way as to make the cancellation of infrared divergences explicit. We also emphasise the special role played by a Lorentz-invariant evolution variable, which coincides with the transverse momentum of the latest emission in a suitably defined dipole zero-momentum frame. Handling large colour matrices presents the most significant challenge to numerical implementations and we present a means to expand systematically about the leading colour approximation. Specifically, we present a systematic procedure to calculate the resulting colour traces, which is based on the colour flow basis. Identifying the leading contribution leads us to re-derive the Banfi-Marchesini-Smye equation. However, our formalism is more general and can systematically perform resummation of contributions enhanced by the 't Hooft coupling α_s N ∼ 1, along with successive perturbations that are parametrically suppressed by powers of 1/N. We also discuss how our approach relates to earlier work.
Computing the Evans function via solving a linear boundary value ODE
NASA Astrophysics Data System (ADS)
Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn
2015-11-01
Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
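The affine-combination step can be illustrated with a short least-squares computation. The sketch below solves for weights that sum to one and reconstruct a template point from its neighbors, using the standard closed form based on the local Gram matrix; the point coordinates and regularization constant are invented for the example and are not taken from the paper.

    import numpy as np

    def affine_weights(point, neighbors, reg=1e-6):
        """Weights w (summing to 1) such that point ≈ sum_j w_j * neighbors[j],
        solved in closed form from the local Gram matrix, as in locally linear reconstructions."""
        diff = point - neighbors                                   # (k, dim): vectors from neighbors to the point
        gram = diff @ diff.T                                       # local Gram matrix
        gram += reg * np.trace(gram) * np.eye(len(neighbors))      # regularize for numerical stability
        w = np.linalg.solve(gram, np.ones(len(neighbors)))
        return w / w.sum()

    # Illustrative template point and its neighboring points (2-D for simplicity).
    p = np.array([1.0, 2.0])
    nbrs = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 4.0]])
    w = affine_weights(p, nbrs)
    print(w, "reconstruction:", w @ nbrs)    # reconstruction should be close to p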
40 CFR 80.75 - Reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... reformulated gasoline or RBOB produced or imported during the following time periods: (i) The first quarterly... to gasoline produced or imported during 1994 shall be included in the first quarterly report in 1995...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.75 Reporting requirements. Any refiner or...
Uncritical Educational Theory in the Guise of Progressive Social Science?
ERIC Educational Resources Information Center
Lingelbach, Karl Christoph
1988-01-01
Examines H. E. Tenorth's critique of research on the Nazi educational system and states that Herman Nohl's historio-educational concept is being reformulated within Tenorth's argumentational framework. Discusses pedagogical and sociological deficits in these efforts of "reformulation" as well as difficulties which arise when applying…
Creativity, Problem Solving, and Solution Set Sightedness: Radically Reformulating BVSR
ERIC Educational Resources Information Center
Simonton, Dean Keith
2012-01-01
Too often, psychological debates become polarized into dichotomous positions. Such polarization may have occurred with respect to Campbell's (1960) blind variation and selective retention (BVSR) theory of creativity. To resolve this unnecessary controversy, BVSR was radically reformulated with respect to creative problem solving. The reformulation…
Combet, Emilie; Vlassopoulos, Antonis; Mölenberg, Famke; Gressier, Mathilde; Privet, Lisa; Wratten, Craig; Sharif, Sahar; Vieux, Florent; Lehmann, Undine; Masset, Gabriel
2017-01-01
Nutrient profiling ranks foods based on their nutrient composition, with applications in multiple aspects of food policy. We tested the capacity of a category-specific model developed for product reformulation to improve the average nutrient content of foods, using five national food composition datasets (UK, US, China, Brazil, France). Products (n = 7183) were split into 35 categories based on the Nestlé Nutritional Profiling Systems (NNPS) and were then classified as NNPS ‘Pass’ if all nutrient targets were met (energy (E), total fat (TF), saturated fat (SFA), sodium (Na), added sugars (AS), protein, calcium). In a modelling scenario, all NNPS Fail products were ‘reformulated’ to meet NNPS standards. Overall, a third (36%) of all products achieved the NNPS standard/pass (inter-country and inter-category range: 32%–40%; 5%–72%, respectively), with most products requiring reformulation in two or more nutrients. The most common nutrients to require reformulation were SFA (22%–44%) and TF (23%–42%). Modelled compliance with NNPS standards could reduce the average content of SFA, Na and AS (10%, 8% and 6%, respectively) at the food supply level. Despite the good potential to stimulate reformulation across the five countries, the study highlights the need for better data quality and granularity of food composition databases. PMID:28430118
Robust feature extraction for rapid classification of damage in composites
NASA Astrophysics Data System (ADS)
Coelho, Clyde K.; Reynolds, Whitney; Chattopadhyay, Aditi
2009-03-01
The ability to detect anomalies in signals from sensors is imperative for structural health monitoring (SHM) applications. Many of the candidate algorithms for these applications either require a lot of training examples or are very computationally inefficient for large sample sizes. The damage detection framework presented in this paper uses a combination of Linear Discriminant Analysis (LDA) along with Support Vector Machines (SVM) to obtain a computationally efficient classification scheme for rapid damage state determination. LDA was used for feature extraction of damage signals from piezoelectric sensors on a composite plate and these features were used to train the SVM algorithm in parts, reducing the computational intensity associated with the quadratic optimization problem that needs to be solved during training. SVM classifiers were organized into a binary tree structure to speed up classification, which also reduces the total training time required. This framework was validated on composite plates that were impacted at various locations. The results show that the algorithm was able to correctly predict the different impact damage cases in composite laminates using less than 21 percent of the total available training data after data reduction.
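A minimal sketch of the LDA-then-SVM idea is shown below using scikit-learn: LDA compresses high-dimensional sensor windows into a handful of discriminative features, and an SVM classifies the damage state in that reduced space. The synthetic signals, class count, and injected signatures are assumptions for the example; the paper additionally trains the SVM in parts and arranges the classifiers in a binary tree, which is not reproduced here.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    # Hypothetical sensor data: rows are windowed piezoelectric signals, labels are damage states.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 256))
    y = rng.integers(0, 4, size=300)       # e.g. healthy plus three impact-damage cases
    X[y == 1, :16] += 1.0                  # inject simple class-dependent signatures
    X[y == 2, 16:32] += 1.0
    X[y == 3, 32:48] += 1.0

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # LDA compresses each signal to at most (n_classes - 1) discriminative features,
    # and the SVM then classifies in that low-dimensional space.
    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3), SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))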
Combined distributed and concentrated transducer network for failure indication
NASA Astrophysics Data System (ADS)
Ostachowicz, Wieslaw; Wandowski, Tomasz; Malinowski, Pawel
2010-03-01
In this paper, an algorithm for localising discontinuities in thin panels made of aluminium alloy is presented. The algorithm uses Lamb wave propagation for discontinuity localisation. Elastic waves were generated and received using piezoelectric transducers arranged in concentrated arrays distributed over the specimen surface, so that almost the whole specimen could be monitored with this combined distributed-concentrated transducer network. The excited elastic waves propagate and reflect from the panel boundaries and from discontinuities in the panel. The wave reflections were registered by the piezoelectric transducers and used in the signal processing algorithm. The proposed processing algorithm consists of two parts: signal filtering and extraction of obstacle locations. The first part enhances the signals by removing noise; the second part extracts features connected with wave reflections from discontinuities. The extracted features were the basis for creating damage influence maps, which indicate the intensity of elastic wave reflections and thus the coordinates of obstacles. The described signal processing algorithms were implemented in the MATLAB environment. It should be underlined that the results presented in this work are based only on experimental signals.
Demand, Supply, and Price Outlook for Reformulated Motor Gasoline 1995
1994-01-01
Provisions of the Clean Air Act Amendments of 1990 designed to reduce ground-level ozone will increase the demand for reformulated motor gasoline in a number of U.S. metropolitan areas. This article discusses the effects of the new regulations on the motor gasoline market and the refining industry.
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2013 CFR
2013-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2012 CFR
2012-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
40 CFR 80.41 - Standards and requirements for compliance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... following standards apply for all reformulated gasoline: (1) The standard for heavy metals, including lead or manganese, on a per-gallon basis, is that reformulated gasoline may contain no heavy metals. The Administrator may waive this prohibition for a heavy metal (other than lead) if the Administrator determines...
An Approach to Revision and Evaluation of Student Writing.
ERIC Educational Resources Information Center
Duke, Charles R.
An approach to evaluating student writing that emphasizes reformulation and deemphasizes grades teaches students that reworking their writing is a necessary and acceptable part of the writing process. Reformulation is divided into rewriting, revising, and editing. The instructor diagnoses student papers to determine significant problems on a…
USDA-ARS?s Scientific Manuscript database
Reformulation of calcium chloride cover brine for cucumber fermentation was explored as a means to minimize the incidence of bloater defect. This study particularly focused on cover brine supplementation with calcium hydroxide, sodium chloride (NaCl), and acids to enhance buffer capacity, inhibit the...
ERIC Educational Resources Information Center
Belkin, N. J.; Cool, C.; Kelly, D.; Lin, S. -J.; Park, S. Y.; Perez-Carballo, J.; Sikora, C.
2001-01-01
Reports on the progressive investigation of techniques for supporting interactive query reformulation in the TREC (Text Retrieval Conference) Interactive Track. Highlights include methods of term suggestion; interface design to support different system functionalities; an overview of each year's TREC investigation; and relevance to the development…
NASA Astrophysics Data System (ADS)
Yun, S.; Agram, P. S.; Fielding, E. J.; Simons, M.; Webb, F.; Tanaka, A.; Lundgren, P.; Owen, S. E.; Rosen, P. A.; Hensley, S.
2011-12-01
Under the ARIA (Advanced Rapid Imaging and Analysis) project at JPL and Caltech, we developed a prototype algorithm to detect surface property changes caused by natural or man-made damage using InSAR coherence change. The algorithm was tested on building demolition and construction sites in downtown Pasadena, California. The developed algorithm performed significantly better, producing a 150% higher signal-to-noise ratio, than a standard coherence change detection method. We applied the algorithm to the February 2011 M6.3 Christchurch earthquake in New Zealand, the 2011 M9.0 Tohoku-oki earthquake in Japan, and the 2011 Kirishima volcano eruption in Kyushu, Japan, using ALOS PALSAR data. In the Christchurch area we detected three different types of damage: liquefaction, building collapse, and landslide. The detected liquefaction damage is extensive in the eastern suburbs of Christchurch, showing Bexley as one of the most significantly affected areas, as was reported in the media. Some places show sharp boundaries of liquefaction damage, indicating different types of ground materials that might have been formed by the meandering Avon River in the past. Well-reported damaged buildings such as Christchurch Cathedral, the Canterbury TV building, the Pyne Gould building, and the Cathedral of the Blessed Sacrament were detected by the algorithm. A landslide in Redcliffs was also clearly detected. These detected damage sites were confirmed with Google Earth images provided by GeoEye. The larger-scale damage pattern also agrees well with the ground truth damage assessment map, compiled by the government of New Zealand, which delineates polygonal zones of three different damage levels. The damage proxy map of the Sendai area in Japan shows man-made structure damage due to the tsunami caused by the M9.0 Tohoku-oki earthquake. The long temporal baseline (~2.7 years) and volume scattering caused significant decorrelation in the farmlands and bush forest along the coastline. The 2011 Kirishima volcano eruption deposited substantial ash fall to the southeast of the volcano. The detected ash fall damage area closely matches the in-situ measurements made through fieldwork by the Geological Survey of Japan. With a 99-percentile threshold for damage detection, the periphery of the detected damage area aligns with the contour line of 100 kg/m2 ash deposit, equivalent to 10 cm of depth assuming a density of 1000 kg/m3 for the ash layer. With a growing number of InSAR missions, rapidly produced, accurate damage assessment maps will help save lives by assisting effective prioritization of rescue operations at the early stages of response, and will significantly improve timely situational awareness for emergency management and national/international assessment and response for recovery planning. Results of this study will also inform the design of future InSAR missions, including the proposed DESDynI.
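The core coherence-change idea can be sketched in a few lines: compute the drop in interferometric coherence from a pre-event pair to a co-event pair and flag pixels whose drop exceeds a high percentile of the scene-wide distribution (the abstract mentions a 99-percentile threshold). The synthetic rasters below are placeholders, and the real ARIA damage proxy map involves additional normalization and masking not shown here.

    import numpy as np

    def damage_proxy_map(coh_pre, coh_co, percentile=99.0):
        """Flag pixels whose interferometric coherence dropped anomalously between a
        pre-event pair and a co-event pair; the threshold is taken from the upper
        percentile of the coherence decrease over the scene."""
        decrease = coh_pre - coh_co                   # positive where the surface decorrelated
        threshold = np.percentile(decrease, percentile)
        return decrease, decrease >= threshold

    # Illustrative synthetic coherence rasters in [0, 1].
    rng = np.random.default_rng(2)
    coh_pre = np.clip(rng.normal(0.7, 0.1, size=(200, 200)), 0, 1)
    coh_co = coh_pre - np.clip(rng.normal(0.05, 0.05, size=(200, 200)), 0, None)
    coh_co[80:90, 80:120] -= 0.4                      # simulated damaged block
    coh_co = np.clip(coh_co, 0, 1)

    dpm, flags = damage_proxy_map(coh_pre, coh_co)
    print("flagged pixels:", int(flags.sum()))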
A novel aliasing-free subband information fusion approach for wideband sparse spectral estimation
NASA Astrophysics Data System (ADS)
Luo, Ji-An; Zhang, Xiao-Ping; Wang, Zhi
2017-12-01
Wideband sparse spectral estimation is generally formulated as a multi-dictionary/multi-measurement (MD/MM) problem which can be solved by using group sparsity techniques. In this paper, the MD/MM problem is reformulated as a single sparse indicative vector (SIV) recovery problem at the cost of introducing an additional system error. Thus, the number of unknowns is reduced greatly. We show that the system error can be neglected under certain conditions. We then present a new subband information fusion (SIF) method to estimate the SIV by jointly utilizing all the frequency bins. With orthogonal matching pursuit (OMP) leveraging the binary property of SIV's components, we develop a SIF-OMP algorithm to reconstruct the SIV. The numerical simulations demonstrate the performance of the proposed method.
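For readers unfamiliar with OMP, the sketch below shows a generic orthogonal matching pursuit recovering a sparse vector with binary-valued support. The dictionary, dimensions, and two-source scenario are invented for illustration; the paper's SIF-OMP additionally fuses information across frequency bins and exploits the binary property of the SIV's components, which this generic version does not.

    import numpy as np

    def omp(A, y, sparsity):
        """Orthogonal matching pursuit: greedily pick the column most correlated with
        the residual, then re-fit all selected coefficients by least squares."""
        residual = y.copy()
        support = []
        for _ in range(sparsity):
            idx = int(np.argmax(np.abs(A.T @ residual)))
            if idx not in support:
                support.append(idx)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # Illustrative fused dictionary and measurement vector (not the paper's actual construction).
    rng = np.random.default_rng(3)
    A = rng.normal(size=(64, 181))                    # e.g. columns indexed by candidate directions
    A /= np.linalg.norm(A, axis=0)
    truth = np.zeros(181)
    truth[[40, 120]] = 1.0                            # binary indicative vector with two active entries
    y = A @ truth + 0.01 * rng.normal(size=64)

    estimate = omp(A, y, sparsity=2)
    print("recovered support:", np.nonzero(estimate)[0])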
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg
2016-12-13
We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring [Formula: see text] operations and [Formula: see text] memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time, and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
Advances in mixed-integer programming methods for chemical production scheduling.
Velez, Sara; Maravelias, Christos T
2014-01-01
The goal of this paper is to critically review advances in the area of chemical production scheduling over the past three decades and then present two recently proposed solution methods that have led to dramatic computational enhancements. First, we present a general framework and problem classification and discuss modeling and solution methods with an emphasis on mixed-integer programming (MIP) techniques. Second, we present two solution methods: (a) a constraint propagation algorithm that allows us to compute parameters that are then used to tighten MIP scheduling models and (b) a reformulation that introduces new variables, thus leading to effective branching. We also present computational results and an example illustrating how these methods are implemented, as well as the resulting enhancements. We close with a discussion of open research challenges and future research directions.
Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation
NASA Astrophysics Data System (ADS)
Bueno, Diana R.; Montano, L.
2017-04-01
Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization which is time-consuming and unfeasible for rehabilitation therapy. Non self-calibration algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments), and also its full self-calibration (subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model as well as its calibration problem have obliged us to adopt the sum of Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.
Structural health monitoring feature design by genetic programming
NASA Astrophysics Data System (ADS)
Harvey, Dustin Y.; Todd, Michael D.
2014-09-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and other high-capital or life-safety critical structures. Conventional data processing involves pre-processing and extraction of low-dimensional features from in situ time series measurements. The features are then input to a statistical pattern recognition algorithm to perform the relevant classification or regression task necessary to facilitate decisions by the SHM system. Traditional design of signal processing and feature extraction algorithms can be an expensive and time-consuming process requiring extensive system knowledge and domain expertise. Genetic programming, a heuristic program search method from evolutionary computation, was recently adapted by the authors to perform automated, data-driven design of signal processing and feature extraction algorithms for statistical pattern recognition applications. The proposed method, called Autofead, is particularly suitable to handle the challenges inherent in algorithm design for SHM problems where the manifestation of damage in structural response measurements is often unclear or unknown. Autofead mines a training database of response measurements to discover information-rich features specific to the problem at hand. This study provides experimental validation on three SHM applications including ultrasonic damage detection, bearing damage classification for rotating machinery, and vibration-based structural health monitoring. Performance comparisons with common feature choices for each problem area are provided demonstrating the versatility of Autofead to produce significant algorithm improvements on a wide range of problems.
Application of higher order SVD to vibration-based system identification and damage detection
NASA Astrophysics Data System (ADS)
Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang
2012-04-01
Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing and identification methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study, three different signal processing and system identification algorithms are proposed: SSA, SSI-COV and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithms are used to process shaking table test data from a 6-story steel frame. Features contained in the vibration data are extracted by the proposed methods, and damage detection is then investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
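The sketch below illustrates one simple subspace-based damage index of this kind: each record is embedded in a Hankel (trajectory) matrix as in SSA, the dominant left singular subspace is extracted by SVD, and the principal angles between the reference and test subspaces serve as the index. The window length, model order, and synthetic two-tone signals are assumptions for the example and do not reproduce the SSI-COV/SSI-DATA formulations of the paper.

    import numpy as np

    def hankel_matrix(x, window):
        # Trajectory (Hankel) matrix used by singular spectrum analysis.
        n_cols = len(x) - window + 1
        return np.column_stack([x[i:i + window] for i in range(n_cols)])

    def subspace_damage_index(x_ref, x_test, window=100, order=6):
        """Compare the dominant left singular subspace of a reference (healthy) record
        with that of a test record; larger principal angles suggest changed dynamics."""
        U_ref, _, _ = np.linalg.svd(hankel_matrix(x_ref, window), full_matrices=False)
        U_tst, _, _ = np.linalg.svd(hankel_matrix(x_test, window), full_matrices=False)
        s = np.linalg.svd(U_ref[:, :order].T @ U_tst[:, :order], compute_uv=False)
        return np.sqrt(np.sum(np.arccos(np.clip(s, -1, 1)) ** 2))

    rng = np.random.default_rng(4)
    t = np.arange(4096) / 200.0
    healthy = np.sin(2 * np.pi * 3.2 * t) + 0.3 * np.sin(2 * np.pi * 9.8 * t) + 0.1 * rng.normal(size=t.size)
    damaged = np.sin(2 * np.pi * 2.6 * t) + 0.3 * np.sin(2 * np.pi * 9.8 * t) + 0.1 * rng.normal(size=t.size)

    print("healthy vs healthy:", subspace_damage_index(healthy, healthy))
    print("healthy vs damaged:", subspace_damage_index(healthy, damaged))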
40 CFR 80.74 - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
...; (8) In the case of butane blended into reformulated gasoline or RBOB under § 80.82, documentation of: (i) The volume of butane added; (ii) The volume of reformulated gasoline or RBOB both prior to and subsequent to the butane blending; (iii) The purity and properties of the butane specified in § 80.82(c) and...
Talking It through: Two French Immersion Learners' Response to Reformulation
ERIC Educational Resources Information Center
Swain, Merrill; Lapkin, Sharon
2002-01-01
This article documents the importance of collaborative dialogue as part of the process of second language learning. The stimulus for the dialogue we discuss in this article was a reformulation of a story written collaboratively in French by Nina and Dara, two adolescent French immersion students. A sociocultural theoretical perspective informs the…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... operations to reformulate their products until October 21, 2012. SUPPLEMENTARY INFORMATION: The Organic Foods... processors are currently using amidated, non-organic pectin in their products. The industry indicated that these processors would need time to reformulate these products using either non-amidated, non-organic...
40 CFR 80.76 - Registration of refiners, importers or oxygenate blenders.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.76... required for any refiner and importer that produces or imports any reformulated gasoline or RBOB, and any... November 1, 1994, or not later than three months in advance of the first date that such person will produce...
40 CFR 80.76 - Registration of refiners, importers or oxygenate blenders.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.76... required for any refiner and importer that produces or imports any reformulated gasoline or RBOB, and any... November 1, 1994, or not later than three months in advance of the first date that such person will produce...
Reformulating Testing to Measure Thinking and Learning. Technical Report No. 6898.
ERIC Educational Resources Information Center
Collins, Allan
This paper discusses systemic problems with testing and outlines two scenarios for reformulating testing based on intelligent tutoring systems. Five desiderata are provided to underpin the type of testing proposed: (1) tests should emphasize learning and thinking; (2) tests should require generation as well as selection; (3) tests should be…
Life Event Types and Attributional Styles as Predictors of Depression in the Elderly.
ERIC Educational Resources Information Center
Patrick, Linda F.; Moore, Janet S.
The reformulated learned helplessness model for the prediction of depression has been investigated extensively in young adults. Results have linked attributions made to undesirable, controllable events to depression in this age group. This reformulated model was investigated in 97 elderly women and was contrasted to the original learned…
Stereotyping in the Representation of Narrative Texts through Visual Reformulation.
ERIC Educational Resources Information Center
Porto, Melina
2003-01-01
Investigated the process of stereotyping in the representation of the content of narrative texts through visual reformulations. Subjects were Argentine college students enrolled in an English course at a university in Argentina. Reveals students' inability to transcend their cultural biases and points to an urgent need to address stereotypes in the…
Pigat, S; Connolly, A; Cushen, M; Cullen, M; O'Mahony, C
2018-02-19
This project quantified the impact that voluntary reformulation efforts of the food industry had on the Irish population's nutrient intake. Nutrient composition data on reformulated products were collected from 14 major food companies for two years, 2005 and 2012. Probabilistic intake assessments were performed using the Irish national food consumption surveys as dietary intake data. The nutrient data were weighted by market shares, replacing existing food composition data for these products. The reformulation efforts assessed significantly reduced mean energy intakes by up to 12 kcal/d (adults), 15 kcal/d (teens), 19 kcal/d (children) and 9 kcal/d (pre-schoolers). Mean daily fat intakes were reduced by up to 1.3 g/d, 1.3 g/d, 0.9 g/d and 0.6 g/d, saturated fat intakes by up to 1.7 g/d, 2.3 g/d, 1.8 g/d and 1 g/d, sugar intakes by up to 1 g/d, 2 g/d, 3.5 g/d and 1 g/d and sodium intakes by up to 0.6 g/d, 0.5 g/d, 0.2 g/d, 0.3 g/d for adults, teenagers, children and pre-school children, respectively. This model enables assessment of the impact of industry reformulation on Irish consumers' nutrient intakes, using consumption, food composition and market share data.
Huang, Yue; Bandosz, Piotr; Capewell, Simon; Wilde, Parke
2018-01-01
Background Sodium consumption is a modifiable risk factor for higher blood pressure (BP) and cardiovascular disease (CVD). The US Food and Drug Administration (FDA) has proposed voluntary sodium reduction goals targeting processed and commercially prepared foods. We aimed to quantify the potential health and economic impact of this policy. Methods and findings We used a microsimulation approach of a close-to-reality synthetic population (US IMPACT Food Policy Model) to estimate CVD deaths and cases prevented or postponed, quality-adjusted life years (QALYs), and cost-effectiveness from 2017 to 2036 of 3 scenarios: (1) optimal, 100% compliance with 10-year reformulation targets; (2) modest, 50% compliance with 10-year reformulation targets; and (3) pessimistic, 100% compliance with 2-year reformulation targets, but with no further progress. We used the National Health and Nutrition Examination Survey and high-quality meta-analyses to inform model inputs. Costs included government costs to administer and monitor the policy, industry reformulation costs, and CVD-related healthcare, productivity, and informal care costs. Between 2017 and 2036, the optimal reformulation scenario achieving the FDA sodium reduction targets could prevent approximately 450,000 CVD cases (95% uncertainty interval: 240,000 to 740,000), gain approximately 2.1 million discounted QALYs (1.7 million to 2.4 million), and produce discounted cost savings (health savings minus policy costs) of approximately $41 billion ($14 billion to $81 billion). In the modest and pessimistic scenarios, health gains would be 1.1 million and 0.7 million QALYS, with savings of $19 billion and $12 billion, respectively. All the scenarios were estimated with more than 80% probability to be cost-effective (incremental cost/QALY < $100,000) by 2021 and to become cost-saving by 2031. Limitations include evaluating only diseases mediated through BP, while decreasing sodium consumption could have beneficial effects upon other health burdens such as gastric cancer. Further, the effect estimates in the model are based on interventional and prospective observational studies. They are therefore subject to biases and confounding that may have influenced also our model estimates. Conclusions Implementing and achieving the FDA sodium reformulation targets could generate substantial health gains and net cost savings. PMID:29634725
Generic entry, reformulations and promotion of SSRIs in the US.
Huskamp, Haiden A; Donohue, Julie M; Koss, Catherine; Berndt, Ernst R; Frank, Richard G
2008-01-01
Previous research has shown that a manufacturer's promotional strategy for a brand name drug is typically affected by generic entry. However, little is known about how newer strategies to extend patent life, including product reformulation introduction or obtaining approval to market for additional clinical indications, influence promotion. To examine the relationships among promotional expenditures, generic entry, reformulation entry and new indication approval. We used quarterly data on national product-level promotional spending (including expenditures for physician detailing and direct-to-consumer advertising [DTCA], and the retail value of free samples distributed in physician offices) for selective serotonin reuptake inhibitors (SSRIs) over the period 1997-2004. We estimated econometric models of detailing, DTCA and total quarterly promotional expenditures as a function of the timing of generic entry, entry of new product formulations and US FDA approval for new clinical indications for existing medications in the SSRI class. Expenditures by pharmaceutical manufacturers for promotion of antidepressant medications was the main outcome measure. Over the period 1997-2004, there was considerable variation in the composition of promotional expenditures across the SSRIs. Promotional expenditures for the original brand molecule decreased dramatically when a reformulation of the molecule was introduced. Promotional spending (both total and detailing alone) for a specific molecule was generally lower after generic entry than before, although the effect of generic entry on promotional spending appears to be closely linked with the choice of product reformulation strategy pursued by the manufacturer. Detailing expenditures for Paxil were increased after the manufacturer received FDA approval to market the drug for generalized anxiety disorder (GAD), while the likelihood of DTCA outlays for the drug was not changed. In contrast, FDA approval to market Paxil and Zoloft for social anxiety disorder (SAD) did not affect the manufacturers' detailing expenditures but did result in a greater likelihood of DTCA outlays. The introduction of new product formulations appears to be a common strategy for attempting to extend market exclusivity for medications facing impending generic entry. Manufacturers who introduced a reformulation before generic entry shifted most promotion dollars from the original brand to the reformulation long before generic entry, and in some cases manufacturers appeared to target a particular promotion type for a given indication. Given the significant impact that pharmaceutical promotion has on demand for prescription drugs in the US, these findings have important implications for prescription drug spending and public health.
Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.
2015-01-01
The Midwest is home to one of the world's largest agricultural growing regions. Between late May and early September, with irrigation and seasonal rainfall, these crops are able to reach full maturity. Using moderate to high resolution remote sensors, the vegetation can be monitored using the red and near-infrared wavelengths, which allow for the calculation of vegetation indices such as the Normalized Difference Vegetation Index (NDVI). Vegetation growth and greenness in this region evolve uniformly as the growing season progresses. However, one of the biggest threats to Midwest vegetation during this period is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive to crop growers, and the damage can be spread over long swaths associated with the tracks of the damaging storms. Damage to the vegetation is apparent in remotely sensed imagery: changes appear slowly over time as lightly damaged crops wilt, or more readily if the storms strip material from the crops or destroy them completely. Previous work on identifying these hail damage swaths used manual interpretation of moderate and higher resolution satellite imagery. With the development of an automated, near-real time hail damage swath identification algorithm, detection can be improved and more damage indicators can be created faster and more efficiently. The automated detection of hail damage swaths examines short-term, large changes in the vegetation by differencing near-real time eight-day NDVI composites against post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. In addition, land surface temperatures from these instruments are examined for hail damage swath identification. Initial validation of the automated algorithm is based upon Storm Prediction Center storm reports as well as the National Severe Storms Laboratory (NSSL) Maximum Estimated Size of Hail (MESH) product. Opportunities for future work are also shown, with a focus on extending this algorithm with pixel-based image classification techniques for tracking surface changes resulting from severe weather.
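A minimal version of the differencing step can be sketched as follows: compute NDVI before and after the storm, and flag pixels whose greenness dropped by more than a threshold relative to the pre-storm composite. The reflectance grids, storm-track geometry, and 0.15 drop threshold are invented for the example; the operational algorithm works on MODIS/VIIRS composites and includes further validation steps.

    import numpy as np

    def ndvi(nir, red):
        # Normalized Difference Vegetation Index from near-infrared and red reflectance.
        return (nir - red) / (nir + red + 1e-9)

    def hail_swath_mask(ndvi_composite_pre, ndvi_post, drop_threshold=0.15):
        """Flag pixels whose greenness dropped sharply relative to the pre-storm
        composite; elongated clusters of such pixels are swath candidates."""
        delta = ndvi_post - ndvi_composite_pre        # negative where vegetation was damaged
        return delta <= -drop_threshold

    # Illustrative reflectance grids (values in [0, 1]); a storm track crosses rows 40-45.
    rng = np.random.default_rng(5)
    red_pre, nir_pre = rng.uniform(0.05, 0.10, (100, 100)), rng.uniform(0.4, 0.5, (100, 100))
    red_post, nir_post = red_pre.copy(), nir_pre.copy()
    nir_post[40:46, :] -= 0.2                         # stripped canopy reflects less NIR
    red_post[40:46, :] += 0.05

    mask = hail_swath_mask(ndvi(nir_pre, red_pre), ndvi(nir_post, red_post))
    print("candidate swath pixels:", int(mask.sum()))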
NASA Astrophysics Data System (ADS)
Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene
2016-07-01
Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for postevent imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high resolution, post-event imagery.
NASA Astrophysics Data System (ADS)
Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid
2017-05-01
Continuous monitoring for damage detection in structural assessment calls for low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology with high feasibility for continuous damage assessment. Specifically, an algorithm based on a data-driven approach using principal component analysis, with acquired signals pre-processed by means of cross-correlation functions, is discussed. A carbon steel pipe section and a laboratory tower were used as test structures in order to demonstrate the feasibility of the methodology in detecting abrupt changes in the structural response when damage occurs. Two types of damage case are studied: a crack and a leak, one for each structure. Experimental results show that the methodology is promising for the continuous monitoring of real structures.
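The sketch below gives a minimal version of this processing chain: each acquired signal is pre-processed by cross-correlation with a reference excitation, a PCA model is fit to baseline (healthy) features, and the Q statistic (squared residual outside the baseline principal subspace) is monitored against a percentile-based limit. The synthetic signals, component count, and control limit are assumptions for the example rather than values from the paper.

    import numpy as np

    def cross_correlation_features(records, reference):
        # Pre-process each acquired signal by cross-correlating it with a reference excitation.
        return np.array([np.correlate(r, reference, mode="same") for r in records])

    def fit_pca(baseline_features, n_components=5):
        mean = baseline_features.mean(axis=0)
        X = baseline_features - mean
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        return mean, Vt[:n_components]

    def q_statistic(features, mean, components):
        # Squared residual after projecting onto the baseline principal subspace (SPE / Q index).
        X = features - mean
        residual = X - (X @ components.T) @ components
        return np.sum(residual ** 2, axis=1)

    rng = np.random.default_rng(6)
    ref = rng.normal(size=256)
    baseline = cross_correlation_features(ref + 0.05 * rng.normal(size=(40, 256)), ref)
    test_ok = cross_correlation_features(ref + 0.05 * rng.normal(size=(5, 256)), ref)
    test_damaged = cross_correlation_features(0.8 * np.roll(ref, 7) + 0.05 * rng.normal(size=(5, 256)), ref)

    mean, comps = fit_pca(baseline)
    limit = np.percentile(q_statistic(baseline, mean, comps), 99)
    print("healthy Q over limit:", q_statistic(test_ok, mean, comps) > limit)
    print("damaged Q over limit:", q_statistic(test_damaged, mean, comps) > limit)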
Fault Detection of Bearing Systems through EEMD and Optimization Algorithm
Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2017-01-01
This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space. PMID:29143772
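A rough sketch of the EEMD feature-extraction stage is shown below, assuming the third-party PyEMD package is available for the decomposition. The statistical features (RMS, kurtosis, skewness of the first few IMFs), the synthetic bearing signals, and the defect frequency are illustrative choices, and the PSO weighting and Isomap visualization from the paper are not reproduced.

    import numpy as np
    from PyEMD import EEMD                 # assumes the PyEMD package is installed
    from scipy.stats import kurtosis, skew
    from sklearn.decomposition import PCA

    def imf_features(signal, n_imfs=4):
        """Decompose a vibration record with EEMD and collect simple statistics
        (RMS, kurtosis, skewness) of the first few intrinsic mode functions."""
        imfs = EEMD().eemd(signal)[:n_imfs]
        feats = []
        for imf in imfs:
            feats += [np.sqrt(np.mean(imf ** 2)), kurtosis(imf), skew(imf)]
        return np.array(feats)

    rng = np.random.default_rng(7)
    t = np.arange(2048) / 12000.0
    records, labels = [], []
    for label, defect_freq in [(0, None), (1, 160.0)]:    # 0 = healthy, 1 = illustrative inner-race defect
        for _ in range(10):
            x = 0.2 * rng.normal(size=t.size)
            if defect_freq:
                # high-frequency resonance gated by periodic impacts at the defect frequency
                x += np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * defect_freq * t) > 0.95)
            records.append(imf_features(x))
            labels.append(label)

    scores = PCA(n_components=2).fit_transform(np.array(records))
    print(np.c_[labels, np.round(scores, 2)])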
What Friedrich Nietzsche Cannot Stand about Education: Toward a Pedagogy of Self-Reformulation.
ERIC Educational Resources Information Center
Bingham, Charles
2001-01-01
Examines Nietzsche's rejection of mass education, arguing that it was based on his desire for education to be more self- reformulative than he thought possible, and concluding that education in schools is beneficial because it can foster radical forms of selfhood. This process can begin by listening to Nietzsche's philosophy while ignoring his…
Addiction Motivation Reformulated: An Affective Processing Model of Negative Reinforcement
ERIC Educational Resources Information Center
Baker, Timothy B.; Piper, Megan E.; McCarthy, Danielle E.; Majeskie, Matthew R.; Fiore, Michael C.
2004-01-01
This article offers a reformulation of the negative reinforcement model of drug addiction and proposes that the escape and avoidance of negative affect is the prepotent motive for addictive drug use. The authors posit that negative affect is the motivational core of the withdrawal syndrome and argue that, through repeated cycles of drug use and…
Exploring the Role of Reformulations and a Model Text in EFL Students' Writing Performance
ERIC Educational Resources Information Center
Yang, Luxin; Zhang, Ling
2010-01-01
This study examined the effectiveness of reformulation and model text in a three-stage writing task (composing-comparison-revising) in an EFL writing class in a Beijing university. The study documented 10 university students' writing performance from the composing (Stage 1) and comparing (Stage 2, where students compare their own text to a…
Vibration Based Sun Gear Damage Detection
NASA Technical Reports Server (NTRS)
Hood, Adrian; LaBerge, Kelsen; Lewicki, David; Pines, Darryll
2013-01-01
Seeded fault experiments were conducted on the planetary stage of an OH-58C helicopter transmission. Two vibration-based methods are discussed that isolate the dynamics of the sun gear from those of the planet gears, bearings, input spiral bevel stage, and other components in and around the gearbox. Three damaged sun gears (two spalled and one cracked) serve as the focus of the current work. A non-sequential vibration separation algorithm was developed and the resulting signals analyzed. The second method uses only the time-synchronously averaged data but takes advantage of the signal/source mapping required for vibration separation. Both algorithms were successful in identifying the spall damage. Sun gear damage was confirmed by the presence of sun mesh groups. The sun tooth crack condition was inconclusive.
NASA Technical Reports Server (NTRS)
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
Supervised detection of exoplanets in high-contrast imaging sequences
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, C. A.; Absil, O.; Van Droogenbroeck, M.
2018-06-01
Context. Post-processing algorithms play a key role in pushing the detection limits of high-contrast imaging (HCI) instruments. State-of-the-art image processing approaches for HCI enable the production of science-ready images relying on unsupervised learning techniques, such as low-rank approximations, for generating a model point spread function (PSF) and subtracting the residual starlight and speckle noise. Aims: In order to maximize the detection rate of HCI instruments and survey campaigns, advanced algorithms with higher sensitivities to faint companions are needed, especially for the speckle-dominated innermost region of the images. Methods: We propose a reformulation of the exoplanet detection task (for ADI sequences) that builds on well-established machine learning techniques to take HCI post-processing from an unsupervised to a supervised learning context. In this new framework, we present algorithmic solutions using two different discriminative models: SODIRF (random forests) and SODINN (neural networks). We test these algorithms on real ADI datasets from VLT/NACO and VLT/SPHERE HCI instruments. We then assess their performances by injecting fake companions and using receiver operating characteristic analysis. This is done in comparison with state-of-the-art ADI algorithms, such as ADI principal component analysis (ADI-PCA). Results: This study shows the improved sensitivity versus specificity trade-off of the proposed supervised detection approach. At the diffraction limit, SODINN improves the true positive rate by a factor ranging from 2 to 10 (depending on the dataset and angular separation) with respect to ADI-PCA when working at the same false-positive level. Conclusions: The proposed supervised detection framework outperforms state-of-the-art techniques in the task of discriminating planet signal from speckles. In addition, it offers the possibility of re-processing existing HCI databases to maximize their scientific return and potentially improve the demographics of directly imaged exoplanets.
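The supervised framing can be illustrated with a generic classifier on labelled residual patches: patches containing injected fake companions form the positive class and speckle-only patches the negative class. The random-forest sketch below (closer in spirit to SODIRF than to the SODINN neural network) uses entirely synthetic patches and an arbitrary patch size, so it only demonstrates the training setup, not the paper's actual sample construction.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical training set: flattened residual patches from processed ADI sequences,
    # labelled 1 where a fake companion was injected and 0 for speckle-only patches.
    rng = np.random.default_rng(8)
    n_samples, patch = 2000, 7 * 7
    speckle = rng.normal(0.0, 1.0, size=(n_samples // 2, patch))
    injected = rng.normal(0.0, 1.0, size=(n_samples // 2, patch))
    injected[:, patch // 2] += 2.5          # a faint point-source bump at the patch centre
    X = np.vstack([speckle, injected])
    y = np.r_[np.zeros(n_samples // 2), np.ones(n_samples // 2)]

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())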
Second-order Poisson Nernst-Planck solver for ion channel transport
Zheng, Qiong; Chen, Duan; Wei, Guo-Wei
2010-01-01
The Poisson Nernst-Planck (PNP) theory is a simplified continuum model for a wide variety of chemical, physical and biological applications. Its ability of providing quantitative explanation and increasingly qualitative predictions of experimental measurements has earned itself much recognition in the research community. Numerous computational algorithms have been constructed for the solution of the PNP equations. However, in the realistic ion-channel context, no second order convergent PNP algorithm has ever been reported in the literature, due to many numerical obstacles, including discontinuous coefficients, singular charges, geometric singularities, and nonlinear couplings. The present work introduces a number of numerical algorithms to overcome the abovementioned numerical challenges and constructs the first second-order convergent PNP solver in the ion-channel context. First, a Dirichlet to Neumann mapping (DNM) algorithm is designed to alleviate the charge singularity due to the protein structure. Additionally, the matched interface and boundary (MIB) method is reformulated for solving the PNP equations. The MIB method systematically enforces the interface jump conditions and achieves the second order accuracy in the presence of complex geometry and geometric singularities of molecular surfaces. Moreover, two iterative schemes are utilized to deal with the coupled nonlinear equations. Furthermore, extensive and rigorous numerical validations are carried out over a number of geometries, including a sphere, two proteins and an ion channel, to examine the numerical accuracy and convergence order of the present numerical algorithms. Finally, application is considered to a real transmembrane protein, the Gramicidin A channel protein. The performance of the proposed numerical techniques is tested against a number of factors, including mesh sizes, diffusion coefficient profiles, iterative schemes, ion concentrations, and applied voltages. Numerical predictions are compared with experimental measurements. PMID:21552336
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.
2003-01-01
A diagnostic tool for detecting damage to gears was developed. Two different measurement technologies, oil debris analysis and vibration were integrated into a health monitoring system for detecting surface fatigue pitting damage on gears. This integrated system showed improved detection and decision-making capabilities as compared to using individual measurement technologies. This diagnostic tool was developed and evaluated experimentally by collecting vibration and oil debris data from fatigue tests performed in the NASA Glenn Spur Gear Fatigue Rig. An oil debris sensor and the two vibration algorithms were adapted as the diagnostic tools. An inductance type oil debris sensor was selected for the oil analysis measurement technology. Gear damage data for this type of sensor was limited to data collected in the NASA Glenn test rigs. For this reason, this analysis included development of a parameter for detecting gear pitting damage using this type of sensor. The vibration data was used to calculate two previously available gear vibration diagnostic algorithms. The two vibration algorithms were selected based on their maturity and published success in detecting damage to gears. Oil debris and vibration features were then developed using fuzzy logic analysis techniques, then input into a multi sensor data fusion process. Results show combining the vibration and oil debris measurement technologies improves the detection of pitting damage on spur gears. As a result of this research, this new diagnostic tool has significantly improved detection of gear damage in the NASA Glenn Spur Gear Fatigue Rigs. This research also resulted in several other findings that will improve the development of future health monitoring systems. Oil debris analysis was found to be more reliable than vibration analysis for detecting pitting fatigue failure of gears and is capable of indicating damage progression. Also, some vibration algorithms are as sensitive to operational effects as they are to damage. Another finding was that clear threshold limits must be established for diagnostic tools. Based on additional experimental data obtained from the NASA Glenn Spiral Bevel Gear Fatigue Rig, the methodology developed in this study can be successfully implemented on other geared systems.
NASA Astrophysics Data System (ADS)
Del Pezzo, Edoardo; Bianco, Francesca
2013-04-01
The civil defense of Italy and the European community have planned to reformulate the volcanic risk in several volcanic areas of Italy, among which Mt. Vesuvius and Campi Flegrei, by taking into account the possible occurrence of damaging pre- or syn-eruptive seismic events. Necessary to achieve this goal is a detailed knowledge of the local attenuation-distance relations. In the present note, we make a survey of the estimates of seismic quality factor (whose inverse is proportional to the attenuation coefficient with distance) reported in the literature for the area of Campi Flegrei, where many, but sometimes contradictory, results have been published on this topic. We try to review these results in order to give indications for their correct use when calculating the attenuation laws for this area.
Non-damaging laser therapy of the macula: Titration algorithm and tissue response
NASA Astrophysics Data System (ADS)
Palanker, Daniel; Lavinsky, Daniel; Dalal, Roopa; Huie, Philip
2014-02-01
Retinal photocoagulation typically results in permanent scarring and scotomata, which limit its applicability to the macula, preclude treatments in the fovea, and restrict retreatment. Non-damaging approaches to laser therapy have been tested in the past, but the lack of reliable titration and slow treatment paradigms limited their clinical use. We developed and tested a titration algorithm for sub-visible and non-damaging treatments of the retina with pulses sufficiently short to be used with pattern laser scanning. The algorithm, based on the Arrhenius model of tissue damage, optimizes the power and duration for every energy level, relative to the threshold of lesion visibility established during titration (and defined as 100%). Experiments with pigmented rabbits established that lesions in the 50-75% energy range were invisible ophthalmoscopically, but detectable with Fluorescein Angiography and OCT, while at 30% energy there was only very minor damage to the RPE, which recovered within a few days. Patients with Diabetic Macular Edema (DME) and Central Serous Retinopathy (CSR) have been treated over the edematous areas at 30% energy, using 200 μm spots with 0.25-diameter spacing. No signs of laser damage have been detected with any imaging modality. In CSR patients, subretinal fluid resolved within 45 days. In DME patients the edema decreased by approximately 150 μm over 60 days. After 3-4 months some patients presented with recurrence of edema, and they responded well to retreatment with the same parameters, without any clinically visible damage. These pilot data indicate the possibility of effective and repeatable macular laser therapy below the tissue damage threshold.
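The underlying trade-off between power and duration at constant damage can be sketched with an Arrhenius damage integral. The sketch below is illustrative only: the rate constants, the rectangular temperature-rise assumption and the bisection search are placeholders, not the clinical titration algorithm or its parameters.

```python
import numpy as np

# Illustrative Arrhenius-based titration sketch (not the clinical algorithm or its constants).
A = 3.1e98          # frequency factor [1/s]      -- placeholder value for illustration only
E = 6.3e5           # activation energy [J/mol]   -- placeholder value for illustration only
R = 8.314           # gas constant [J/(mol K)]
T0 = 310.0          # baseline tissue temperature [K]

def damage_integral(delta_T, duration_s):
    """Arrhenius damage for a rectangular temperature rise delta_T lasting duration_s."""
    return A * np.exp(-E / (R * (T0 + delta_T))) * duration_s

def delta_T_for_target(duration_s, omega_target=1.0):
    """Temperature rise needed to reach omega_target within duration_s (bisection)."""
    lo, hi = 0.0, 60.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if damage_integral(mid, duration_s) < omega_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Shorter pulses need a larger (power-driven) temperature rise to reach the same damage level.
for dur in [0.2, 0.05, 0.02, 0.01]:       # seconds
    print(f"{dur*1000:6.0f} ms pulse -> ~{delta_T_for_target(dur):.1f} K rise for Omega = 1")
```

Holding the damage integral fixed while varying pulse duration is the sense in which power and duration can be co-optimized for every energy level relative to the titrated visibility threshold.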
Chilcoat, Howard D; Coplan, Paul M; Harikrishnan, Venkatesh; Alexander, Louis
2016-08-01
Doctor-shopping (obtaining prescriptions from multiple prescribers/pharmacies) for opioid analgesics produces a supply for diversion and abuse, and represents a major public health issue. An open cohort study assessed changes in doctor-shopping in the U.S. for a brand extended release (ER) oxycodone product (OxyContin) and comparator opioids before (July 2009 to June 2010) versus after (January 2011 to June 2013) introduction of reformulated brand ER oxycodone with abuse-deterrent properties, using IMS LRx longitudinal data covering >150 million patients and 65% of retail U.S. prescriptions. After its reformulation, the rate of doctor-shopping decreased 50% (for 2+ prescribers/3+ pharmacies) for brand ER oxycodone, but not for comparators. The largest decreases in rates occurred among young adults (73%), those paying with cash (61%) and those receiving the highest available dose (62%), with a 90% decrease when stratifying by all three characteristics. The magnitude of doctor-shopping reductions increased with increasing number of prescribers/pharmacies (e.g., 75% reduction for ≥2 prescribers/≥4 pharmacies). The rate of doctor-shopping for brand ER oxycodone decreased substantially after its reformulation, which did not occur for other prescription opioids. The largest reductions in doctor-shopping occurred with characteristics associated with higher abuse risk such as youth, cash payment and high dose, and with more specific thresholds of doctor-shopping. A higher prescriber and/or pharmacy threshold also increased the magnitude of the decrease, suggesting that it better captured the effect of the reformulation on actual doctor-shoppers. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Reformulating Non-Monotonic Theories for Inference and Updating
NASA Technical Reports Server (NTRS)
Grosof, Benjamin N.
1992-01-01
We aim to help build programs that do large-scale, expressive non-monotonic reasoning (NMR): especially, 'learning agents' that store, and revise, a body of conclusions while continually acquiring new, possibly defeasible, premise beliefs. Currently available procedures for forward inference and belief revision are exhaustive, and thus impractical: they compute the entire non-monotonic theory, then re-compute from scratch upon updating with new axioms. These methods are thus badly intractable. In most theories of interest, even backward reasoning is combinatoric (at least NP-hard). Here, we give theoretical results for prioritized circumscription that show how to reformulate default theories so as to make forward inference be selective, as well as concurrent; and to restrict belief revision to a part of the theory. We elaborate a detailed divide-and-conquer strategy. We develop concepts of structure in NM theories, by showing how to reformulate them in a particular fashion: to be conjunctively decomposed into a collection of smaller 'part' theories. We identify two well-behaved special cases that are easily recognized in terms of syntactic properties: disjoint appearances of predicates, and disjoint appearances of individuals (terms). As part of this, we also definitionally reformulate the global axioms, one by one, in addition to applying decomposition. We identify a broad class of prioritized default theories, generalizing default inheritance, for which our results especially bear fruit. For this asocially monadic class, decomposition permits reasoning to be localized to individuals (ground terms), and reduced to propositional. Our reformulation methods are implementable in polynomial time, and apply to several other NM formalisms beyond circumscription.
Industry Approach to Nutrition-Based Product Development and Reformulation in Asia.
Vlassopoulos, Antonis; Masset, Gabriel; Leroy, Fabienne; Spieldenner, Jörg
2015-01-01
In recent years, there has been a proliferation of initiatives to classify food products according to their nutritional composition (e.g., high in fat/sugar) to better guide consumer choices and regulate the food environment. This global trend, lately introduced in Asia as well, utilizes nutrient profiling (NP) to set compositional criteria for food products. Even though the use of NP to set targets for product reformulation has been proposed for years, to date only two NP systems have been specifically developed for that purpose. The majority of the NP applications, especially in Asia, focus on marketing and/or health claim regulation, as well as front-of-pack labeling. Product reformulation has been identified, by the World Health Organization and other official bodies, as a key tool for the food industry to help address public health nutrition priorities and provide support towards the reduction of excessive dietary sugar, salt and fats. In the United Kingdom, the Responsibility Deal is an excellent example of a public-private collaborative initiative that successfully reduced the salt content of products available in the supermarkets by 20-30%, resulting in an estimated 10% reduction in salt intake at the population level. Validation of NP systems targeted towards reformulation supports the hypothesis that, by adopting them, the industry can actively support existing policies in the direction of lowering consumption of public health-sensitive nutrients. The symposium presented a discussion on the current NP landscape in Asia, the importance of reformulation for public health and the Nestlé approach to improve the food environment in Asia through NP.
ERIC Educational Resources Information Center
Dintzer, Leonard; Wortman, Camille B.
1978-01-01
The reformulated learned helplessness model of depression (Abramson, Seligman, & Teasdale, 1978) was examined. Argues that unless it is possible to specify the conditions under which a given attribution will be made, the model becomes circular and lacks predictive power. Discusses Abramson et al.'s suggestions for therapy and prevention. (Editor/RK)
ERIC Educational Resources Information Center
Bozlee, Brian J.
2007-01-01
The impact of raising Gibbs energy of the enzyme-substrate complex (G_3) and the reformulation of the Michaelis-Menten equation are discussed. The maximum velocity of the reaction (v_m) and characteristic constant for the enzyme (K_M) will increase with increase in Gibbs energy, indicating that the rate of reaction…
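For reference, the textbook Michaelis-Menten rate law that the article reformulates in thermodynamic terms is shown below (standard form only, not the article's Gibbs-energy derivation):

```latex
v \;=\; \frac{v_{m}\,[S]}{K_{M} + [S]},
\qquad
v_{m} = k_{2}\,[E]_{0},
\qquad
K_{M} = \frac{k_{-1} + k_{2}}{k_{1}}
```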
ERIC Educational Resources Information Center
Liu, Ming-Chi; Huang, Yueh-Min; Kinshuk; Wen, Dunwei
2013-01-01
It is critical that students learn how to retrieve useful information in hypermedia environments, a task that is often especially difficult when it comes to image retrieval, as little text feedback is given that allows them to reformulate keywords they need to use. This situation may make students feel disorientated while attempting image…
ERIC Educational Resources Information Center
Rubilar, Álvaro Sebastián Bustos; Badillo, Gonzalo Zubieta
2017-01-01
In this article, we report how a geometric task based on the ACODESA methodology (collaborative learning, scientific debate and self-reflection) promotes the reformulation of the students' validations and allows revealing the students' aims in each of the stages of the methodology. To do so, we present the case of a team and, particularly, one of…
Dunlap, Eloise; Graves, Jennifer; Benoit, Ellen
2012-01-01
In recent years, numerous weather disasters have crippled many cities and towns across the United States of America. Such disasters present a unique opportunity for analyses of the disintegration and reformulation of drug markets. Disasters present new facts which cannot be “explained” by existing theories. Recent and continuing disasters present a radically different picture from that of police crackdowns, where market disruptions are carried out on a limited basis (both use and sales). Generally, users and sellers move to other locations and business continues as usual. The Katrina Disaster in 2005 offered a larger opportunity to understand the functioning and processes by which drug markets may or may not survive. This manuscript presents a paradigm which uses stages as a testable concept to scientifically examine the disintegration and reformulation of drug markets during disaster or crisis situations. It describes the specific processes – referred to as stages – which drug markets must go through in order to function and survive during and after a natural disaster. Prior to Hurricane Katrina, there had never before been a situation in which a drug market was struck by a disaster that forced its disintegration and reformulation. PMID:22728093
How Abbott’s Fenofibrate Franchise Avoided Generic Competition
Downing, Nicholas S.; Ross, Joseph S.; Jackevicius, Cynthia A.; Krumholz, Harlan M.
2013-01-01
The ongoing debate concerning the efficacy of fenofibrate has overshadowed an important aspect of the drug’s history: Abbott, the maker of branded fenofibrate, has produced several bioequivalent reformulations, which dominate the market even though generic fenofibrate has been available for almost a decade. This continued use of branded formulations, which cost twice as much as generic versions of fenofibrate, imposes an annual cost of approximately $700 million on our healthcare system. Abbott maintained its dominance of the fenofibrate market, in part, through a complex switching strategy involving the sequential launch of branded reformulations that had not been shown to be superior to the first generation product and patent litigation that delayed the approval of generic formulations. The small differences in dose of the newer branded formulations prevented substitution with generics of older generation products. As soon as direct generic competition seemed likely at the new dose level where substitution would be allowed, Abbott would launch another reformulation and the cycle would repeat. Our objective, using the fenofibrate example, is to describe how current policy can allow pharmaceutical companies to maintain market share using reformulations of branded medications without demonstrating the superiority of next generation products. PMID:22493409
Avoidance of generic competition by Abbott Laboratories' fenofibrate franchise.
Downing, Nicholas S; Ross, Joseph S; Jackevicius, Cynthia A; Krumholz, Harlan M
2012-05-14
The ongoing debate concerning the efficacy of fenofibrate has overshadowed an important aspect of the drug's history: Abbott Laboratories, the maker of branded fenofibrate, has produced several bioequivalent reformulations that dominate the market, although generic fenofibrate has been available for almost a decade. This continued use of branded formulations, which cost twice as much as generic versions of fenofibrate, imposes an annual cost of approximately $700 million on the US health care system. Abbott Laboratories maintained its dominance of the fenofibrate market in part through a complex switching strategy involving the sequential launch of branded reformulations that had not been shown to be superior to the first-generation product and patent litigation that delayed the approval of generic formulations. The small differences in dose of the newer branded formulations prevented their substitution with generics of older-generation products. As soon as direct generic competition seemed likely at the new dose level, where substitution would be allowed, Abbott would launch another reformulation, and the cycle would repeat. Based on the fenofibrate example, our objective is to describe how current policy can allow pharmaceutical companies to maintain market share using reformulations of branded medications, without demonstrating the superiority of next-generation products.
NASA Astrophysics Data System (ADS)
Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong
2018-03-01
This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which can directly reflect the change trend of the curvature of the point cloud data in the damage region. The extracted damage region is divided into triangular prism elements by a method of triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in a further research stage.
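The prism-and-superposition arithmetic can be sketched as below. The point cloud, the damage-depth field and the steel density are synthetic placeholders; the sketch only illustrates how triangulated prism volumes sum to a mass estimate, not the authors' segmentation pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical sketch of the volume/mass estimate for a scanned damage region.
# Each Delaunay triangle in the (x, y) plane, with the local damage depth, defines a
# triangular prism; prism volumes are summed (superposition) and converted to mass.

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 5.0, size=(400, 2))                         # mm, points over the defect
depth = np.clip(0.3 - 0.1 * np.hypot(xy[:, 0] - 2.5, xy[:, 1] - 2.5), 0.0, None)  # mm below nominal surface

tri = Delaunay(xy)
total_volume_mm3 = 0.0
for simplex in tri.simplices:
    p = xy[simplex]
    area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                     - (p[1, 1] - p[0, 1]) * (p[2, 0] - p[0, 0]))  # triangle area in the plane
    mean_depth = depth[simplex].mean()                             # average of the three vertex depths
    total_volume_mm3 += area * mean_depth                          # triangular-prism volume

density_mg_per_mm3 = 7.85                                          # ~steel, mg/mm^3
print(f"damage volume ~ {total_volume_mm3:.2f} mm^3, "
      f"mass ~ {total_volume_mm3 * density_mg_per_mm3:.1f} mg")
```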
Competitive Facility Location with Fuzzy Random Demands
NASA Astrophysics Data System (ADS)
Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke
2010-10-01
This paper proposes a new location problem for competitive facilities, e.g. shops, in a plane, in which the demands for the facilities involve both uncertainty and vagueness. By representing the demands for facilities as fuzzy random variables, the location problem can be formulated as a fuzzy random programming problem. For solving the fuzzy random programming problem, first the α-level sets for fuzzy numbers are used to transform it into a stochastic programming problem, and secondly, by using their expectations and variances, it can be reformulated as a deterministic programming problem. After showing that one of their optimal solutions can be found by solving 0-1 programming problems, a solution method is proposed based on an improved tabu search algorithm with strategic oscillation. The efficiency of the proposed method is shown by applying it to numerical examples of the facility location problems.
A Boltzmann machine for the organization of intelligent machines
NASA Technical Reports Server (NTRS)
Moed, Michael C.; Saridis, George N.
1990-01-01
A three-tier structure consisting of organization, coordination, and execution levels forms the architecture of an intelligent machine, following the principle of increasing precision with decreasing intelligence from the theory of hierarchically intelligent control. This system has been formulated as a probabilistic model, where uncertainty and imprecision can be expressed in terms of entropies. The optimal strategy for decision planning and task execution can be found by minimizing the total entropy in the system. The focus is on the design of the organization level as a Boltzmann machine. Since this level is responsible for planning the actions of the machine, the Boltzmann machine is reformulated to use entropy as the cost function to be minimized. Simulated annealing, expanding subinterval random search, and the genetic algorithm are presented as search techniques to efficiently find the desired action sequence and are illustrated with numerical examples.
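A minimal simulated-annealing sketch over action orderings is given below. The "entropy-like" cost is a toy placeholder (it simply penalizes out-of-order action pairs), not the entropy model of the paper, and the temperature schedule is illustrative.

```python
import math
import random

# Illustrative simulated annealing over action sequences; the cost function is a
# placeholder "entropy" that penalizes disorder, not the entropy model of the paper.

actions = list(range(8))                       # candidate primitive actions
random.seed(1)

def cost(sequence):
    """Toy entropy-like cost: number of out-of-order adjacent pairs."""
    return sum(1.0 for a, b in zip(sequence, sequence[1:]) if a > b)

def anneal(seq, t0=2.0, cooling=0.995, steps=5000):
    current, best = list(seq), list(seq)
    t = t0
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]   # swap two actions
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):   # Metropolis acceptance
            current = candidate
            if cost(current) < cost(best):
                best = list(current)
        t *= cooling
    return best

start = random.sample(actions, len(actions))
print("start:", start, "cost:", cost(start))
best = anneal(start)
print("best: ", best, "cost:", cost(best))
```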
NASA Astrophysics Data System (ADS)
Wang, Jia Jie; Wriedt, Thomas; Han, Yi Ping; Mädler, Lutz; Jiao, Yong Chang
2018-05-01
Light scattering by a radially inhomogeneous droplet, which is modeled by a multilayered sphere, is investigated within the framework of Generalized Lorenz-Mie Theory (GLMT), with particular effort devoted to the analysis of the internal field distribution in the case of shaped beam illumination. To circumvent numerical difficulties in the computation of the internal field for an absorbing/non-absorbing droplet with a fairly large size parameter, a recursive algorithm is proposed by reformulating the equations for the expansion coefficients. Two approaches are proposed for the prediction of the internal field distribution, namely a rigorous method and an approximation method. The developed computer code is found to be stable over a wide range of size parameters. Numerical computations are implemented to simulate the internal field distributions of a radially inhomogeneous droplet illuminated by a focused Gaussian beam.
Overview of the Phoenix Entry, Descent and Landing System Architecture
NASA Technical Reports Server (NTRS)
Grover, Myron R., III; Cichy, Benjamin D.; Desai, Prasun N.
2008-01-01
NASA's Phoenix Mars Lander began its journey to Mars from Cape Canaveral, Florida in August 2007, but its journey to the launch pad began many years earlier in 1997 as NASA's Mars Surveyor Program 2001 Lander. In the intervening years, the entry, descent and landing (EDL) system architecture went through a series of changes, resulting in the system flown to the surface of Mars on May 25th, 2008. Some changes, such as entry velocity and landing site elevation, were the result of differences in mission design. Other changes, including the removal of hypersonic guidance, the reformulation of the parachute deployment algorithm, and the addition of the backshell avoidance maneuver, were driven by constant efforts to augment system robustness. An overview of the Phoenix EDL system architecture is presented along with rationales driving these architectural changes.
Evolutionary Algorithms Approach to the Solution of Damage Detection Problems
NASA Astrophysics Data System (ADS)
Salazar Pinto, Pedro Yoajim; Begambre, Oscar
2010-09-01
In this work, a new Self-Configured Hybrid Algorithm is proposed by combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the Guide Particle: this particle (the best PSO global in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In different tests carried out on benchmark functions reported in the international literature, better performance in stability and accuracy was observed; therefore, the new algorithm was used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.
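A rough sketch of the elitist guide-particle idea is given below. The objective function, swarm parameters and the re-seeding rule are placeholders, and the GA layer that controls the guide particle in the paper is omitted; this is not the authors' algorithm, only the generic PSO-with-elite-injection pattern.

```python
import numpy as np

# Simplified sketch of an elitist PSO: the global best of each generation ("guide
# particle") is re-injected into the next generation. Objective and parameters are placeholders.

rng = np.random.default_rng(42)
dim, n_particles, n_gen = 4, 20, 200

def objective(x):                       # placeholder damage-identification residual
    return np.sum((x - 0.3) ** 2, axis=-1)

pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pbest)
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients
for gen in range(n_gen):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
    pos[rng.integers(n_particles)] = gbest   # guide particle: seed one particle with the elite

print("estimated damage parameters:", np.round(gbest, 4))
```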
NASA Technical Reports Server (NTRS)
Bakuckas, J. G.; Tan, T. M.; Lau, A. C. W.; Awerbuch, J.
1993-01-01
A finite element-based numerical technique has been developed to simulate damage growth in unidirectional composites. This technique incorporates elastic-plastic analysis, micromechanics analysis, failure criteria, and a node splitting and node force relaxation algorithm to create crack surfaces. Any combination of fiber and matrix properties can be used. One of the salient features of this technique is that damage growth can be simulated without pre-specifying a crack path. In addition, multiple damage mechanisms in the forms of matrix cracking, fiber breakage, fiber-matrix debonding and plastic deformation are capable of occurring simultaneously. The prevailing failure mechanism and the damage (crack) growth direction are dictated by the instantaneous near-tip stress and strain fields. Once the failure mechanism and crack direction are determined, the crack is advanced via the node splitting and node force relaxation algorithm. Simulations of the damage growth process in center-slit boron/aluminum and silicon carbide/titanium unidirectional specimens were performed. The simulation results agreed quite well with the experimental observations.
Leibig, Christian; Wachtler, Thomas; Zeck, Günther
2016-09-15
Unsupervised identification of action potentials in multi-channel extracellular recordings, in particular from high-density microelectrode arrays with thousands of sensors, is an unresolved problem. While independent component analysis (ICA) achieves rapid unsupervised sorting, it ignores the convolutive structure of extracellular data, thus limiting the unmixing to a subset of neurons. Here we present a spike sorting algorithm based on convolutive ICA (cICA) to retrieve a larger number of accurately sorted neurons than with instantaneous ICA while accounting for signal overlaps. Spike sorting was applied to datasets with varying signal-to-noise ratios (SNR: 3-12) and 27% spike overlaps, sampled at either 11.5 or 23kHz on 4365 electrodes. We demonstrate how the instantaneity assumption in ICA-based algorithms has to be relaxed in order to improve the spike sorting performance for high-density microelectrode array recordings. Reformulating the convolutive mixture as an instantaneous mixture by modeling several delayed samples jointly is necessary to increase signal-to-noise ratio. Our results emphasize that different cICA algorithms are not equivalent. Spike sorting performance was assessed with ground-truth data generated from experimentally derived templates. The presented spike sorter was able to extract ≈90% of the true spike trains with an error rate below 2%. It was superior to two alternative (c)ICA methods (≈80% accurately sorted neurons) and comparable to a supervised sorting. Our new algorithm represents a fast solution to overcome the current bottleneck in spike sorting of large datasets generated by simultaneous recording with thousands of electrodes. Copyright © 2016 Elsevier B.V. All rights reserved.
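The key reformulation — treating several delayed samples jointly so that a convolutive mixture looks instantaneous — can be sketched as follows. The data are synthetic, the embedding depth is arbitrary, and sklearn's FastICA stands in for the convolutive ICA variants compared in the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Sketch of the instantaneity trick: stack L delayed copies of each channel so that a
# convolutive mixture can be unmixed with an instantaneous ICA. Synthetic toy data.

rng = np.random.default_rng(0)
n_samples, n_channels, L = 5000, 8, 5

sources = rng.laplace(size=(2, n_samples))                  # spiky, super-Gaussian sources
filters = rng.normal(size=(n_channels, 2, 3))               # short mixing filters (convolutive)
x = np.zeros((n_channels, n_samples))
for ch in range(n_channels):
    for s in range(2):
        x[ch] += np.convolve(sources[s], filters[ch, s], mode="same")

# Time embedding: each channel contributes rows [x_c(t), x_c(t-1), ..., x_c(t-L+1)]
embedded = np.vstack([np.roll(x, k, axis=1) for k in range(L)])[:, L:]

ica = FastICA(n_components=10, random_state=0, max_iter=1000)
components = ica.fit_transform(embedded.T)                  # (samples, components)
print("embedded data shape:", embedded.shape, "-> components:", components.shape)
```

The embedding multiplies the number of "channels" by the delay depth, which is why the approach pays off mainly on high-density arrays where there are channels to spare.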
NASA Technical Reports Server (NTRS)
Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John
2013-01-01
This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting with an optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation. This equation can be solved by level set methods. This approach has been applied on an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that this approach is successful in estimating the safe maneuvering envelope for damaged aircraft.
NASA Astrophysics Data System (ADS)
Borgelt, Christian
In clustering we often face the situation that only a subset of the available attributes is relevant for forming clusters, even though this may not be known beforehand. In such cases it is desirable to have a clustering algorithm that automatically weights attributes or even selects a proper subset. In this paper I study such an approach for fuzzy clustering, which is based on the idea to transfer an alternative to the fuzzifier (Klawonn and Höppner, What is fuzzy about fuzzy clustering? Understanding and improving the concept of the fuzzifier, In: Proc. 5th Int. Symp. on Intelligent Data Analysis, 254-264, Springer, Berlin, 2003) to attribute weighting fuzzy clustering (Keller and Klawonn, Int J Uncertain Fuzziness Knowl Based Syst 8:735-746, 2000). In addition, by reformulating Gustafson-Kessel fuzzy clustering, a scheme for weighting and selecting principal axes can be obtained. While in Borgelt (Feature weighting and feature selection in fuzzy clustering, In: Proc. 17th IEEE Int. Conf. on Fuzzy Systems, IEEE Press, Piscataway, NJ, 2008) I already presented such an approach for a global selection of attributes and principal axes, this paper extends it to a cluster-specific selection, thus arriving at a fuzzy subspace clustering algorithm (Parsons, Haque, and Liu, 2004).
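A minimal sketch of attribute-weighted fuzzy c-means in the spirit of the Keller and Klawonn scheme is given below, with update formulas written from the standard weighted objective. It is simplified relative to the paper: global weights only, standard fuzzifier, no cluster-specific or principal-axis weighting; data and parameters are illustrative.

```python
import numpy as np

# Sketch of attribute-weighted fuzzy c-means (global attribute weights). Simplified
# relative to the cluster-specific / principal-axis variants discussed in the paper.

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0, 0, 0], [1, 1, 5], (100, 3)),     # third attribute is mostly noise
               rng.normal([4, 4, 0], [1, 1, 5], (100, 3))])
n, d = X.shape
c, m, t = 2, 2.0, 2.0                       # clusters, fuzzifier, weight exponent
w = np.full(d, 1.0 / d)                     # attribute weights, sum to 1
centers = X[rng.choice(n, c, replace=False)]

for _ in range(100):
    # weighted squared distances and membership update
    dist2 = np.array([[np.sum(w**t * (x - ck) ** 2) for ck in centers] for x in X]) + 1e-12
    u = 1.0 / np.sum((dist2[:, :, None] / dist2[:, None, :]) ** (1.0 / (m - 1)), axis=2)
    um = u ** m
    # center update (per-attribute weights cancel)
    centers = (um.T @ X) / um.sum(axis=0)[:, None]
    # attribute-weight update from per-attribute scatter
    D = np.einsum("ik,ikj->j", um, (X[:, None, :] - centers[None, :, :]) ** 2) + 1e-12
    w = (1.0 / D) ** (1.0 / (t - 1))
    w /= w.sum()

print("attribute weights:", np.round(w, 3))   # the noisy attribute receives a low weight
```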
Generic Entry, Reformulations, and Promotion of SSRIs
Donohue, Julie M.; Koss, Catherine; Berndt, Ernst R.; Frank, Richard G.
2009-01-01
Background Previous research has shown that a manufacturer’s promotional strategy for a brand-name drug is typically affected by generic entry. However, little is known about how newer strategies to extend patent life, including product reformulation introduction or obtaining approval to market for additional clinical indications, influence promotion. Objective To examine the relationship between promotional expenditures, generic entry, reformulation entry, and new indication approval. Study Design/Setting We used quarterly data on national product-level promotional spending (including expenditures for physician detailing and direct-to-consumer advertising (DTCA), and the retail value of free samples distributed in physician offices) for selective serotonin reuptake inhibitors (SSRIs) over the period 1997 through 2004. We estimated econometric models of detailing, DTCA, and total quarterly promotional expenditures as a function of the timing of generic entry, entry of new product formulations, and Food and Drug Administration (FDA) approval for new clinical indications for existing medications in the SSRI class. Main Outcome Measure Expenditures by pharmaceutical manufacturers for promotion of antidepressant medications. Results Over the period 1997–2004, there was considerable variation in the composition of promotional expenditures across the SSRIs. Promotional expenditures for the original brand molecule decreased dramatically when a reformulation of the molecule was introduced. Promotional spending (both total and detailing alone) for a specific molecule was generally lower after generic entry than before, although the effect of generic entry on promotional spending appears to be closely linked with the choice of product reformulation strategy pursued by the manufacturer. Detailing expenditures for Paxil were increased after the manufacturer received FDA approval to market the drug for generalized anxiety disorder (GAD), while the likelihood of DTCA outlays for the drug was not changed. In contrast, FDA approval to market Paxil and Zoloft for social anxiety disorder (SAD) did not affect the manufacturers’ detailing expenditures but did result in a greater likelihood of DTCA outlays. Conclusion The introduction of new product formulations appears to be a common strategy for attempting to extend market exclusivity for medications facing impending generic entry. Manufacturers that introduced a reformulation before generic entry shifted most promotion dollars from the original brand to the reformulation long before generic entry, and in some cases manufacturers appeared to target a particular promotion type for a given indication. Given the significant impact pharmaceutical promotion has on demand for prescription drugs, these findings have important implications for prescription drug spending and public health. PMID:18563951
Post-hurricane forest damage assessment using satellite remote sensing
W. Wang; J.J. Qu; X. Hao; Y. Liu; J.A. Stanturf
2010-01-01
This study developed a rapid assessment algorithm for post-hurricane forest damage estimation using moderate resolution imaging spectroradiometer (MODIS) measurements. The performance of five commonly used vegetation indices as post-hurricane forest damage indicators was investigated through statistical analysis. The Normalized Difference Infrared Index (NDII) was...
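For reference, the NDII contrasts near-infrared and shortwave-infrared reflectance; the sketch below shows the index itself, with the MODIS band pairing (band 2 NIR, band 6 SWIR) stated as an assumption rather than the study's exact choice, and illustrative reflectance values.

```python
import numpy as np

# Minimal NDII sketch: (NIR - SWIR) / (NIR + SWIR) from surface reflectance.
# Band 2 (NIR) and band 6 (SWIR) are one common MODIS pairing, assumed here.

def ndii(nir, swir):
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir + 1e-12)

pre  = ndii(nir=[0.35, 0.40], swir=[0.18, 0.20])   # pre-hurricane pixels (illustrative)
post = ndii(nir=[0.28, 0.30], swir=[0.22, 0.24])   # post-hurricane pixels
print("NDII change (post - pre):", np.round(post - pre, 3))   # a drop suggests canopy damage
```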
ERIC Educational Resources Information Center
Santos, Maria; Lopez-Serrano, Sonia; Manchon, Rosa M.
2010-01-01
Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a…
Buckley, Nicholas A.; Degenhardt, Louisa; Larance, Briony; Cairns, Rose; Dobbins, Timothy A.; Pearson, Sallie-Anne
2018-01-01
BACKGROUND: Australia introduced tamper-resistant controlled-release (CR) oxycodone in April 2014. We quantified the impact of the reformulation on dispensing, switching and poisonings. METHODS: We performed interrupted time-series analyses using population-representative national dispensing data from 2012 to 2016. We measured dispensing of oxycodone CR (≥ 10 mg), discontinuation of use of strong opioids and switching to other strong opioids after the reformulation compared with a historical control period. Similarly, we compared calls about intentional opioid poisoning using data from a regional poisons information centre. RESULTS: After the reformulation, dispensing decreased for 10–30 mg (total level shift −11.1%, 95% confidence interval [CI], −17.2% to −4.6%) and 40–80 mg oxycodone CR (total level shift −31.5%, 95% CI −37.5% to −24.9%) in participants less than 65 years of age but was unchanged in people 65 years of age or older. Compared with the previous year, discontinuation of use of strong opioids did not increase (adjusted hazard ratio [HR] 0.95, 95% CI 0.91 to 1.00), but switching to oxycodone/naloxone did increase (adjusted HR 1.54, 95% CI 1.32 to 1.79). Switching to morphine varied by age (p < 0.001), and the greatest increase was in participants less than 45 years of age (adjusted HR 4.33, 95% CI 2.13 to 8.80). Participants switching after the reformulation were more likely to be dispensed a tablet strength of 40 mg or more (adjusted odds ratio [OR] 1.40, 95% CI 1.09 to 1.79). Calls for intentional poisoning that involved oxycodone taken orally increased immediately after the reformulation (incidence rate ratio (IRR) 1.31, 95% CI 1.05–1.64), but there was no change for injected oxycodone. INTERPRETATION: The reformulation had a greater impact on opioid access patterns of people less than 65 years of age who were using higher strengths of oxycodone CR. This group has been identified as having an increased risk of problematic opioid use and warrants closer monitoring in clinical practice. PMID:29581162
Murteira, Susana; Millier, Aurélie; Ghezaiel, Zied; Lamure, Michel
2014-01-01
Background Repurposing has become a mainstream strategy in drug development, but it faces multiple challenges, amongst them the increasing and ever changing regulatory framework. This is the second study in a three-part publication project with the ultimate goal of understanding the market access rationale and conditions attributed to drug repurposing in the United States and in Europe. The aim of the current study is to evaluate the regulatory path associated with each type of repurposing strategy according to the nomenclature previously proposed in the first article of this series. Methods From the cases identified, a selection process retrieved a total of 141 case studies in all countries, harmonized for data availability and common approval in the United States and in Europe. Regulatory information for each original and repurposed drug product was extracted, and several related regulatory attributes were also extracted, such as designation change and filing before or after patent expiry, among others. Descriptive analyses were conducted to determine trends and to investigate potential associations between the different regulatory paths and attributes of interest, for reformulation and repositioning cases separately. Results Within the studied European countries, most of the applications for reformulated products were filed through national applications. In contrast, for repositioned products, the centralized procedure was the most frequent regulatory pathway. Most of the repurposing cases were approved before patent expiry, and those cases have followed more complex regulatory pathways in the United States and in Europe. For new molecular entities filed in the United States, a similar number of cases were developed by serendipity and by a hypothesis-driven approach. However, for the new indication's regulatory pathway in the United States, most of the cases were developed through a hypothesis-driven approach. Conclusion Examining the regulations in the United States and in Europe for drug repositionings and reformulations confirmed that repositioning strategies were usually filed under a more complex regulatory process than reformulations. Also, it seems that parameters such as patent expiry and the type of repositioning approach or reformulation affect the regulatory pathways chosen for each case. PMID:27226839
Schaffer, Andrea L; Buckley, Nicholas A; Degenhardt, Louisa; Larance, Briony; Cairns, Rose; Dobbins, Timothy A; Pearson, Sallie-Anne
2018-03-26
Australia introduced tamper-resistant controlled-release (CR) oxycodone in April 2014. We quantified the impact of the reformulation on dispensing, switching and poisonings. We performed interrupted time-series analyses using population-representative national dispensing data from 2012 to 2016. We measured dispensing of oxycodone CR (≥ 10 mg), discontinuation of use of strong opioids and switching to other strong opioids after the reformulation compared with a historical control period. Similarly, we compared calls about intentional opioid poisoning using data from a regional poisons information centre. After the reformulation, dispensing decreased for 10-30 mg (total level shift -11.1%, 95% confidence interval [CI], -17.2% to -4.6%) and 40-80 mg oxycodone CR (total level shift -31.5%, 95% CI -37.5% to -24.9%) in participants less than 65 years of age but was unchanged in people 65 years of age or older. Compared with the previous year, discontinuation of use of strong opioids did not increase (adjusted hazard ratio [HR] 0.95, 95% CI 0.91 to 1.00), but switching to oxycodone/naloxone did increase (adjusted HR 1.54, 95% CI 1.32 to 1.79). Switching to morphine varied by age ( p < 0.001), and the greatest increase was in participants less than 45 years of age (adjusted HR 4.33, 95% CI 2.13 to 8.80). Participants switching after the reformulation were more likely to be dispensed a tablet strength of 40 mg or more (adjusted odds ratio [OR] 1.40, 95% CI 1.09 to 1.79). Calls for intentional poisoning that involved oxycodone taken orally increased immediately after the reformulation (incidence rate ratio (IRR) 1.31, 95% CI 1.05-1.64), but there was no change for injected oxycodone. The reformulation had a greater impact on opioid access patterns of people less than 65 years of age who were using higher strengths of oxycodone CR. This group has been identified as having an increased risk of problematic opioid use and warrants closer monitoring in clinical practice. © 2018 Joule Inc. or its licensors.
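The level-shift estimate from an interrupted time-series analysis can be reproduced in miniature with a segmented regression. The monthly counts below are synthetic and the model is deliberately simple (trend, step at the intervention, post-intervention trend change); the published analysis uses population-representative dispensing data and a richer specification.

```python
import numpy as np
import statsmodels.api as sm

# Minimal segmented-regression sketch for an interrupted time series.
# Monthly counts are synthetic; the published analysis is more elaborate.

rng = np.random.default_rng(7)
months = np.arange(48)                      # 4 years of monthly data
intervention = 24                           # reformulation month
post = (months >= intervention).astype(float)
time_since = np.where(post == 1, months - intervention, 0.0)

true = 1000 + 2.0 * months - 150 * post - 3.0 * time_since
y = true + rng.normal(0, 25, months.size)   # observed dispensings

X = sm.add_constant(np.column_stack([months, post, time_since]))
fit = sm.OLS(y, X).fit()
level_shift = fit.params[2]
baseline_at_intervention = fit.params[0] + fit.params[1] * intervention
print(f"estimated level shift: {level_shift:.1f} "
      f"({100 * level_shift / baseline_at_intervention:.1f}% of the pre-intervention level)")
```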
NASA Astrophysics Data System (ADS)
Gerist, Saleheh; Maheri, Mahmoud R.
2016-12-01
In order to solve the structural damage detection problem, a multi-stage method using particle swarm optimization is presented. First, a sparse recovery method, known as Basis Pursuit (BP), is utilized to preliminarily identify structural damage locations. The BP method solves a system of equations which relates the damage parameters to the structural modal responses using the sensitivity matrix. Then, the results of this stage are subsequently refined to the exact damage locations and extents using the PSO search engine. Finally, the search space is reduced by elimination of some low-damage variables using a micro search (MS) operator embedded in the PSO algorithm. To overcome the noise present in structural responses, a method known as Basis Pursuit De-Noising (BPDN) is also used. The efficiency of the proposed method is investigated by three numerical examples: a cantilever beam, a plane truss and a portal plane frame. The frequency response is used to detect damage in the examples. The simulation results demonstrate the accuracy and efficiency of the proposed method in detecting multiple damage cases and exhibit its robustness regarding noise and its advantages compared to other reported solution algorithms.
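The sparse first stage can be sketched with a Lasso solve, which is closely related to Basis Pursuit De-Noising. The sensitivity matrix, damage vector and threshold below are synthetic placeholders; the real problem builds the sensitivity matrix from modal or frequency-response data.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sketch of the sparse first stage: recover a sparse damage vector from a linear
# sensitivity model  delta_response ≈ S @ damage.  Lasso (l1-regularized least squares)
# is used as a stand-in for Basis Pursuit De-Noising; all data are synthetic.

rng = np.random.default_rng(5)
n_responses, n_elements = 40, 120
S = rng.normal(size=(n_responses, n_elements))      # sensitivity matrix (placeholder)

true_damage = np.zeros(n_elements)
true_damage[[17, 63]] = [0.25, 0.40]                # two damaged elements
delta_resp = S @ true_damage + rng.normal(0, 0.01, n_responses)   # noisy measurements

bpdn = Lasso(alpha=0.01, max_iter=10000, positive=True).fit(S, delta_resp)
candidates = np.flatnonzero(bpdn.coef_ > 0.01)      # preliminary damage locations for the PSO stage
print("candidate damaged elements:", candidates,
      "extents:", np.round(bpdn.coef_[candidates], 3))
```

In the multi-stage scheme, only these candidate locations (and their neighbourhoods) would be passed to the PSO search, which then resolves the exact extents.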
Zehrer, Cindy L; Holm, David; Solfest, Staci E; Walters, Shelley-Ann
2014-12-01
This study compared moisture vapour transmission rate (MVTR) and wear time or fluid-handling capacities of six adhesive foam dressings to a reformulated control dressing. Standardised in vitro MVTR methodology and a previously published in vivo artificial wound model (AWM) were used. Mean inverted MVTR for the reformulated dressing was 12 750 g/m²/24 hours and was significantly higher than four of the six comparator dressings (P < 0·0001), which ranged from 830 to 11 360 g/m²/24 hours. Mean upright MVTR for the reformulated dressing was 980 g/m²/24 hours and was significantly different than all of the comparator dressings (P < 0·0001), which ranged from 80 to 1620 g/m²/24 hours (three higher/three lower). The reformulated dressing median wear time ranged from 6·1 to >7·0 days, compared with 1·0 to 3·5 days for the comparator dressings (P = 0·0012 to P < 0·0001). The median fluid volume handled ranged from 78·0 to >87 ml compared with 13·0 to 44·5 ml for the comparator dressings (P = 0·0007 to P < 0·001). Interestingly, inverted MVTR did not correspond well to the AWM. These results suggest that marked differences exist between the dressings in terms of both MVTR and wear time or fluid-handling capacity. Furthermore, high inverted MVTR does not necessarily predict longer wear time or fluid-handling capacities of absorbent dressings. © 2013 The Authors. International Wound Journal © 2013 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
Energy compensation following consumption of sugar-reduced products: a randomized controlled trial.
Markey, Oonagh; Le Jeune, Julia; Lovegrove, Julie A
2016-09-01
Consumption of sugar-reformulated products (commercially available foods and beverages that have been reduced in sugar content through reformulation) is a potential strategy for lowering sugar intake at a population level. The impact of sugar-reformulated products on body weight, energy balance (EB) dynamics and cardiovascular disease risk indicators has yet to be established. The REFORMulated foods (REFORM) study examined the impact of an 8-week sugar-reformulated product exchange on body weight, EB dynamics, blood pressure, arterial stiffness, glycemia and lipemia. A randomized, controlled, double-blind, crossover dietary intervention study was performed with fifty healthy normal to overweight men and women (age 32.0 ± 9.8 years, BMI 23.5 ± 3.0 kg/m²) who were randomly assigned to consume either regular sugar or sugar-reduced foods and beverages for 8 weeks, separated by a 4-week washout period. Body weight, energy intake (EI), energy expenditure and vascular markers were assessed at baseline and after both interventions. We found that carbohydrate (P < 0.001), total sugars (P < 0.001) and non-milk extrinsic sugars (P < 0.001) (% EI) were lower, whereas fat (P = 0.001) and protein (P = 0.038) intakes (% EI) were higher on the sugar-reduced than the regular diet. No effects on body weight, blood pressure, arterial stiffness, fasting glycemia or lipemia were observed. Consumption of sugar-reduced products, as part of a blinded dietary exchange for an 8-week period, resulted in a significant reduction in sugar intake. Body weight did not change significantly, which we propose was due to energy compensation.
Cassidy, Theresa A; Thorley, Eileen; Black, Ryan A; DeVeaugh-Geiss, Angela; Butler, Stephen F; Coplan, Paul
To examine abuse prevalence for OxyContin and comparator opioids over a 6-year period prior to and following market entry of reformulated OxyContin and assess consistency in abuse across treatment settings and geographic regions. An observational study examining longitudinal changes using cross-sectional data from treatment centers for substance use disorder. A total of 874 facilities in 39 states in the United States within the National Addictions Vigilance Intervention and Prevention Program (NAVIPPRO®) surveillance system. Adults (72,060) assessed for drug problems using the Addiction Severity Index-Multimedia Version (ASI-MV®) from January 2009 through December 2015 who abused prescription opioids. Percent change in past 30-day abuse. OxyContin had significantly lower abuse 5 years after reformulation compared to levels for original OxyContin. Consistency of magnitude in OxyContin abuse reductions across geographic regions, ranging from 41 to 52 percent with differences in abuse reductions in treatment setting categories occurred. Changes in geographic region and treatment settings across study years did not bias the estimate of lower OxyContin abuse through confounding. In the postmarket setting, limitations and methodologic challenges in abuse measurement exist and it is difficult to isolate singular impacts of any one intervention given the complexity of prescription opioid abuse. Expectations for a reasonable threshold of abuse for any one ADF product or ADF opioids as a class are still uncertain and undefined. A significant decline in abuse prevalence of reformulated OxyContin was observed 5 years after its reformulation among this treatment sample of individuals assessed for substance use disorder that was lower historically for the original formulation of this product.
Alander, Timo J A; Leskinen, Ari P; Raunemaa, Taisto M; Rantanen, Leena
2004-05-01
Diesel exhaust particles are a major constituent of urban carbonaceous aerosol and are linked to a wide range of adverse environmental and health effects. In this work, the effects of fuel reformulation, oxidation catalyst, engine type, and engine operation parameters on diesel particle emission characteristics were investigated. Particle emissions from an indirect injection (IDI) and a direct injection (DI) engine car operating under steady-state conditions with a reformulated low-sulfur, low-aromatic fuel and a standard-grade fuel were analyzed. Organic (OC) and elemental (EC) carbon fractions of the particles were quantified by a thermal-optical transmission analysis method and particle size distributions were measured with a scanning mobility particle sizer (SMPS). The particle volatility characteristics were studied with a configuration that consisted of a thermal desorption unit and an SMPS. In addition, the volatility of size-selected particles was determined with a tandem differential mobility analyzer technique. The reformulated fuel was found to produce 10-40% less particulate carbon mass compared to the standard fuel. On the basis of the carbon analysis, the organic carbon contributed 27-61% to the carbon mass of the IDI engine particle emissions, depending on the fuel and engine operation parameters. The fuel reformulation reduced the particulate organic carbon emissions by 10-55%. In the particles of the DI engine, the organic carbon contributed 14-26% to the total carbon emissions, reflecting the advanced engine technology and the oxidation catalyst, which reduce the OC/EC ratio of the particles considerably. A relatively good consistency between the particulate organic fraction quantified with the thermal-optical method and the volatile fraction measured with the thermal desorption unit and SMPS was found.
Estimating Impacts of Diesel Fuel Reformulation with Vector-based Blending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadder, G.R.
2003-01-23
The Oak Ridge National Laboratory Refinery Yield Model has been used to study the refining cost, investment, and operating impacts of specifications for reformulated diesel fuel (RFD) produced in refineries of the U.S. Midwest in summer of year 2010. The study evaluates different diesel fuel reformulation investment pathways. The study also determines whether there are refinery economic benefits for producing an emissions reduction RFD (with flexibility for individual property values) compared to a vehicle performance RFD (with inflexible recipe values for individual properties). Results show that refining costs are lower with early notice of requirements for RFD. While advanced desulfurization technologies (with low hydrogen consumption and little effect on cetane quality and aromatics content) reduce the cost of ultra low sulfur diesel fuel, these technologies contribute to the increased costs of a delayed notice investment pathway compared to an early notice investment pathway for diesel fuel reformulation. With challenging RFD specifications, there is little refining benefit from producing emissions reduction RFD compared to vehicle performance RFD. As specifications become tighter, processing becomes more difficult, blendstock choices become more limited, and refinery benefits vanish for emissions reduction relative to vehicle performance specifications. Conversely, the emissions reduction specifications show increasing refinery benefits over vehicle performance specifications as specifications are relaxed, and alternative processing routes and blendstocks become available. In sensitivity cases, the refinery model is also used to examine the impact of RFD specifications on the economics of using Canadian synthetic crude oil. There is a sizeable increase in synthetic crude demand as ultra low sulfur diesel fuel displaces low sulfur diesel fuel, but this demand increase would be reversed by requirements for diesel fuel reformulation.
Efficient Reformulation of the Thermoelastic Higher-order Theory for Fgms
NASA Technical Reports Server (NTRS)
Bansal, Yogesh; Pindera, Marek-Jerzy; Arnold, Steven M. (Technical Monitor)
2002-01-01
Functionally graded materials (FGMs) are characterized by spatially variable microstructures which are introduced to satisfy given performance requirements. The microstructural gradation gives rise to continuously or discretely changing material properties which complicate FGM analysis. Various techniques have been developed during the past several decades for analyzing traditional composites and many of these have been adapted for the analysis of FGMs. Most of the available techniques use the so-called uncoupled approach in order to analyze graded structures. These techniques ignore the effect of microstructural gradation by employing specific spatial material property variations that are either assumed or obtained by local homogenization. The higher-order theory for functionally graded materials (HOTFGM) is a coupled approach developed by Aboudi et al. (1999) which takes the effect of microstructural gradation into consideration and does not ignore the local-global interaction of the spatially variable inclusion phase(s). Despite its demonstrated utility, however, the original formulation of the higher-order theory is computationally intensive. Herein, an efficient reformulation of the original higher-order theory for two-dimensional elastic problems is developed and validated. The local-global conductivity and local-global stiffness matrix approach is used in order to reduce the number of equations involved. In this approach, surface-averaged quantities are the primary variables, which replace the volume-averaged quantities employed in the original formulation. The reformulation decreases the size of the global conductivity and stiffness matrices by approximately sixty percent. Various thermal, mechanical, and combined thermomechanical problems are analyzed in order to validate the accuracy of the reformulated theory through comparison with analytical and finite-element solutions. The presented results illustrate the efficiency of the reformulation and its advantages in analyzing functionally graded materials.
Degenhardt, Louisa; Bruno, Raimondo; Ali, Robert; Lintzeris, Nicholas; Farrell, Michael; Larance, Briony
2015-06-01
There is increasing concern about tampering of pharmaceutical opioids. We describe early findings from an Australian study examining the potential impact of the April 2014 introduction of an abuse-deterrent sustained-release oxycodone formulation (Reformulated OxyContin(®)). Data on pharmaceutical opioid sales; drug use by people who inject drugs regularly (PWID); client visits to the Sydney Medically Supervised Injecting Centre (MSIC); and last drug injected by clients of inner-Sydney needle-syringe programmes (NSPs) were obtained, 2009-2014. A cohort of n=606 people tampering with pharmaceutical opioids was formed pre-April 2014, and followed up May-August 2014. There were declines in pharmacy sales of 80mg OxyContin(®) post-introduction of the reformulated product, the dose most commonly diverted and injected by PWID. Reformulated OxyContin(®) was among the least commonly used and injected drugs among PWID. This was supported by Sydney NSP data. There was a dramatic reduction in MSIC visits for injection of OxyContin(®) post-introduction of the new formulation (from 62% of monthly visits pre-introduction to 5% of visits, August 2014). The NOMAD cohort confirmed a reduction in OxyContin(®) use/injection post-introduction. Reformulated OxyContin(®) was cheaper and less attractive for tampering than Original OxyContin(®). These data suggest that, in the short term, introduction of an abuse-deterrent formulation of OxyContin(®) in Australia was associated with a reduction in injection of OxyContin(®), with no clear switch to other drugs. Reformulated OxyContin(®), in this short follow-up, does not appear to be considered as attractive for tampering. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Peripheral intravenous and central catheter algorithm: a proactive quality initiative.
Wilder, Kerry A; Kuehn, Susan C; Moore, James E
2014-12-01
Peripheral intravenous (PIV) infiltrations causing tissue damage are a global issue surrounded by situations that make vascular access decisions difficult. The purpose of this quality improvement project was to develop an algorithm and assess its effectiveness in reducing PIV infiltrations in neonates. The targeted subjects were all infants in our neonatal intensive care unit (NICU) with a PIV catheter. We completed a retrospective chart review of the electronic medical record to collect 4th quarter 2012 baseline data. Following adoption of the algorithm, we also performed a daily manual count of all PIV catheters in the 1st and 2nd quarters 2013. Daily PIV days were defined as follows: 1 patient with a PIV catheter equals 1 PIV day. An infant with 2 PIV catheters in place was counted as 2 PIV days. Our rate of infiltration or tissue damage was determined by counting the number of events and dividing by the number of PIV days. The rate of infiltration or tissue damage was reported as the number of events per 100 PIV days. The number of infiltrations and PIV catheters was collected from the electronic medical record and also verified manually by daily assessment after adoption of the algorithm. The aim was to reduce the rate of PIV infiltrations leading to grade 4 infiltration and tissue damage by at least 30% in the NICU population; the outcome measure was the incidence of PIV infiltrations/100 catheter days. The baseline rate for total infiltrations increased slightly from 5.4 to 5.68/100 PIV days (P = .397) for the NICU. We attributed this increase to heightened awareness and better reporting. Grade 4 infiltrations decreased from 2.8 to 0.83/100 PIV catheter days (P = .00021) after the algorithm was implemented. Tissue damage also decreased from 0.68 to 0.3/100 PIV days (P = .11). Statistical analysis used the Fisher exact test; results were considered statistically significant at P < .05. Our findings suggest that utilization of our standardized decision pathway was instrumental in providing guidance for problem solving related to vascular access decisions. We feel this contributed to the overall reduction in grade 4 intravenous infiltration and tissue damage rates. Grade 4 infiltration reductions were highly statistically significant (P = .00021).
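The rate definition and the significance test can be reproduced with a short sketch. The event and PIV-day counts below are hypothetical, chosen only to mirror the per-100-PIV-days arithmetic and the Fisher exact comparison; they are not the study's actual totals.

```python
from scipy.stats import fisher_exact

# Hypothetical counts illustrating the rate arithmetic and the Fisher exact comparison;
# they are not the study's actual event and PIV-day totals.

baseline_events, baseline_piv_days = 28, 1000        # grade 4 infiltrations, pre-algorithm
post_events, post_piv_days = 10, 1200                # post-algorithm

rate_pre = 100 * baseline_events / baseline_piv_days     # events per 100 PIV days
rate_post = 100 * post_events / post_piv_days
table = [[baseline_events, baseline_piv_days - baseline_events],
         [post_events, post_piv_days - post_events]]
_, p_value = fisher_exact(table)

print(f"pre: {rate_pre:.2f}/100 PIV days, post: {rate_post:.2f}/100 PIV days, p = {p_value:.4f}")
```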
Rotenberg, Ken J; Costa, Paula; Trueman, Mark; Lattimore, Paul
2012-08-01
The study tested the Reformulated Helplessness model prediction that individuals who show combined internal locus of control, high stability and high globality attributions for negative life events are prone to depression. Thirty-six women (mean age 29 years, 8 months) receiving clinical treatment for eating disorders completed the Attribution Style Questionnaire, the Beck Depression Inventory, and the Stirling Eating Disorder Scales. A hierarchical regression analysis (HRA) yielded a three-way interaction among the attributional dimensions on depressive symptoms. Plotting of the slopes showed that the attribution of negative life events to the combination of internal locus of control, high stability, and high globality was associated with the highest level of depressive symptoms. The findings supported the Reformulated Helplessness model of depression. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Role of Reformulation in the Automatic Design of Satisfiability Procedures
NASA Technical Reports Server (NTRS)
VanBaalen, Jeffrey
1992-01-01
Recently there has been increasing interest in the problem of knowledge compilation (Selman & Kautz, 1991). This is the problem of identifying tractable techniques for determining the consequences of a knowledge base. We have developed and implemented a technique, called DRAT, that given a theory, i.e., a collection of first-order clauses, can often produce a type of decision procedure for that theory that can be used in place of a general-purpose first-order theorem prover for determining many of the consequences of that theory. Hence, DRAT does a type of knowledge compilation. Central to the DRAT technique is a type of reformulation in which a problem's clauses are restated in terms of different nonlogical symbols. The reformulation is isomorphic in the sense that it does not change the semantics of a problem.
Newtonian Gravity Reformulated
NASA Astrophysics Data System (ADS)
Dehnen, H.
2018-01-01
With reference to MOND, we propose a reformulation of Newton's theory of gravity in the sense of static electrodynamics, introducing a "material" quantity in analogy to the dielectric "constant". We propose that this quantity is induced by vacuum polarizations generated by the gravitational field itself. Herewith the flat rotation curves of spiral galaxies can be explained; moreover, the observed high velocities near the center of the galaxy should be reconsidered.
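The dielectric analogy can be made concrete with the standard MOND-type field equation, in which an interpolating function plays the role of a field-dependent "permittivity". This is a sketch of the generic Bekenstein-Milgrom form, not necessarily the author's specific formulation:

```latex
\nabla \cdot \left[\, \mu\!\left(\frac{|\nabla\Phi|}{a_{0}}\right) \nabla\Phi \,\right] \;=\; 4\pi G\,\rho ,
\qquad
\mu(x) \to 1 \ \ (x \gg 1), \qquad \mu(x) \to x \ \ (x \ll 1)
```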
De Keukeleire, Steven; Desmet, Stefanie; Lagrou, Katrien; Oosterlynck, Julie; Verhulst, Manon; Van Besien, Jessica; Saegeman, Veroniek; Reynders, Marijke
2017-03-01
The performance of Elecsys Syphilis was compared to Architect Syphilis TP and Reformulated Architect Syphilis TP. The overall sensitivity and specificity were 98.4% and 99.5%, 97.7% and 97.1%, and 99.2% and 99.7% for the three assays, respectively. The assays are comparable and considered adequate for syphilis screening. Copyright © 2016 Elsevier Inc. All rights reserved.
Reformulated gasoline deal with Venezuela draws heat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Begley, R.
A fight is brewing in Congress over a deal to let Venezuela off the hook in complying with the Clean Air Act reformulated gasoline rule. When Venezuela threatened to call for a GATT panel to challenge the rule as a trade barrier, the Clinton Administration negotiated to alter the rule, a deal that members of Congress are characterizing as "secret" and "back door."
Reformulation of Stmerin(®) D CFC formulation using HFA propellants.
Murata, Saburo; Izumi, Takashi; Ito, Hideki
2013-01-01
Stmerin(®) D was reformulated using hydrofluoroalkanes (HFA-134a and HFA-227) as alternative propellants instead of chlorofluorocarbons (CFCs); in the original formulation the active ingredients were suspended in mixed CFCs (CFC-11/CFC-12/CFC-114). Here, we report the suspension stability and spray performance of the original CFC formulation and a reformulation using HFAs. We prepared metered dose inhalers (MDIs) using HFAs with different surfactants and co-solvents, and investigated the effect on suspension stability by visual testing. We found that the drug suspension stability was poor in both HFAs, but was improved, particularly for HFA-227, by adding medium-chain fatty acid triglycerides (MCT) to the formulation. However, the vapor pressure of HFA-227 is higher than that of the CFC mixture, and this increased the fine particle dose (FPD). Spray performance was adjusted by altering the actuator configuration, and the performance of different actuators was tested by cascade impaction. We found the spray performance could be controlled by the configuration of the actuator. A spray performance comparable to the original formulation was obtained with a 0.8 mm orifice diameter and a 90° cone angle. These results demonstrate that the reformulation of Stmerin(®) D using HFA-227 is feasible by using MCT as a suspending agent and modifying the actuator configuration.
Product reformulation in the food system to improve food safety. Evaluation of policy interventions.
Marotta, Giuseppe; Simeone, Mariarosaria; Nazzaro, Concetta
2014-03-01
The objective of this study is to understand the level of attention that the consumer awards to a balanced diet and to product ingredients, with a twofold purpose: to understand whether food product reformulation can generate a competitive advantage for companies that practice it, and to evaluate the most appropriate policy interventions to promote a healthy diet. In the absence of binding rules, a reformulation strategy could be driven by consumer demand. Results from qualitative research and from empirical analysis have shown that the question of health is a latent demand influenced by two main factors: a general lack of information, and the marketing strategies adopted by companies, which increase the information asymmetry between producers and consumers. In the absence of binding rules, it is therefore necessary that the government implement information campaigns (food education) aimed at increasing knowledge regarding the effects of unhealthy ingredients, in order to inform and improve consumer choice. It is only by means of widespread information campaigns that food product reformulation can become a strategic variable and allow companies to gain a competitive advantage. This may lead to virtuous results in terms of reducing the social costs related to an unhealthy diet. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xie, Fengle; Jiang, Zhansi; Jiang, Hui
2018-05-01
This paper presents a multi-damage identification method for cantilever beams. First, the damage location is identified by using the mode shape curvatures. Second, samples of varying damage severities at the damage location and their corresponding natural frequencies are used to construct the initial Kriging surrogate model. Then a particle swarm optimization (PSO) algorithm is employed to identify the damage severities based on the Kriging surrogate model. A simulation study of a double-damaged cantilever beam demonstrated that the proposed method is effective.
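As a rough illustration of the surrogate-plus-search idea described above, the following sketch fits a Gaussian-process (Kriging-type) surrogate of natural frequency versus a single damage-severity parameter and then searches it with a basic particle swarm optimizer. The frequency model, parameter range, and all constants are illustrative assumptions, not the beam model or settings used in the paper.

    # Sketch: Kriging (Gaussian-process) surrogate of natural frequency vs. damage
    # severity, searched with a basic particle swarm optimizer (PSO).
    # Assumptions: one damage parameter in [0, 1]; a synthetic frequency model
    # stands in for the finite-element samples used in the paper.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def fe_frequency(severity):
        # placeholder for the finite-element natural frequency at a given severity
        return 120.0 * np.sqrt(1.0 - 0.6 * severity)

    # 1) training samples of varying severity -> Kriging surrogate
    sev_train = np.linspace(0.0, 1.0, 15).reshape(-1, 1)
    freq_train = fe_frequency(sev_train).ravel()
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    surrogate.fit(sev_train, freq_train)

    # 2) PSO searches the severity whose surrogate frequency matches the measurement
    measured = fe_frequency(0.35)                  # pretend measured frequency
    def cost(s):
        return (surrogate.predict(np.array([[s]]))[0] - measured) ** 2

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 1.0, 20)                # particle positions
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)]
    for _ in range(60):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)]
    print("identified severity:", round(float(gbest), 3))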
Automated segmentation of comet assay images using Gaussian filtering and fuzzy clustering.
Sansone, Mario; Zeni, Olga; Esposito, Giovanni
2012-05-01
Comet assay is one of the most popular tests for the detection of DNA damage at single cell level. In this study, an algorithm for comet assay analysis has been proposed, aiming to minimize user interaction and providing reproducible measurements. The algorithm comprises two-steps: (a) comet identification via Gaussian pre-filtering and morphological operators; (b) comet segmentation via fuzzy clustering. The algorithm has been evaluated using comet images from human leukocytes treated with a commonly used DNA damaging agent. A comparison of the proposed approach with a commercial system has been performed. Results show that fuzzy segmentation can increase overall sensitivity, giving benefits in bio-monitoring studies where weak genotoxic effects are expected.
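To make the two-step idea concrete, here is a minimal sketch: Gaussian pre-filtering of a synthetic grayscale image followed by intensity-based fuzzy c-means clustering. The synthetic image, two-cluster setup, and membership threshold are illustrative assumptions and not the published pipeline or parameters.

    # Sketch: Gaussian pre-filtering, then fuzzy c-means on pixel intensities,
    # on a synthetic grayscale image (illustrative stand-in for a comet image).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, n_iter=50):
        """Fuzzy c-means on a 1-D feature (pixel intensity)."""
        rng = np.random.default_rng(1)
        u = rng.random((n_clusters, values.size))
        u /= u.sum(axis=0)                     # memberships sum to 1 per pixel
        for _ in range(n_iter):
            um = u ** m
            centers = um @ values / um.sum(axis=1)
            dist = np.abs(values[None, :] - centers[:, None]) + 1e-12
            u = 1.0 / (dist ** (2.0 / (m - 1.0)))
            u /= u.sum(axis=0)
        return u, centers

    # synthetic "comet" image: bright blob on a dark background plus noise
    img = np.zeros((64, 64))
    img[20:40, 25:45] = 1.0
    img += 0.3 * np.random.default_rng(2).random(img.shape)

    smoothed = gaussian_filter(img, sigma=2.0)          # step (a): pre-filtering
    u, centers = fuzzy_cmeans_1d(smoothed.ravel())      # step (b): fuzzy clustering
    comet_cluster = int(np.argmax(centers))             # brighter center = comet
    mask = (u[comet_cluster] > 0.5).reshape(img.shape)
    print("segmented comet pixels:", int(mask.sum()))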
Joint histogram-based cost aggregation for stereo matching.
Min, Dongbo; Lu, Jiangbo; Do, Minh N
2013-10-01
This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to significantly reduce the complexity of cost aggregation in stereo matching. Unlike previous methods, which have tried to reduce the complexity in terms of the size of the image and the matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.
Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang
2018-06-01
Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.
Advances of the smooth variable structure filter: square-root and two-pass formulations
NASA Astrophysics Data System (ADS)
Gadsden, S. Andrew; Lee, Andrew S.
2017-01-01
The smooth variable structure filter (SVSF) has seen significant development and research activity in recent years. It is based on sliding mode concepts, which utilize a switching gain that brings an inherent amount of stability to the estimation process. In an effort to improve upon the numerical stability of the SVSF, a square-root formulation is derived. The square-root SVSF is based on Potter's algorithm. The proposed formulation is computationally more efficient and reduces the risks of failure due to numerical instability. The new strategy is applied on target tracking scenarios for the purposes of state estimation, and the results are compared with the popular Kalman filter. In addition, the SVSF is reformulated to present a two-pass smoother based on the SVSF gain. The proposed method is applied on an aerospace flight surface actuator, and the results are compared with the Kalman-based two-pass smoother.
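Since the square-root formulation is stated to be based on Potter's algorithm, the following sketch shows the classical Potter square-root measurement update for a scalar measurement, which propagates a square-root factor S of the covariance (P = S Sᵀ) instead of P itself. Variable names are generic and this is not the SVSF gain or the paper's full filter; it only illustrates the numerical building block.

    # Sketch: classical Potter square-root measurement update (scalar measurement).
    import numpy as np

    def potter_update(x, S, h, z, r):
        """x: state (n,), S: square root of covariance (P = S S^T),
        h: measurement row (n,), z: scalar measurement, r: measurement noise variance."""
        phi = S.T @ h                      # (n,)
        a = 1.0 / (phi @ phi + r)
        gamma = 1.0 / (1.0 + np.sqrt(a * r))
        K = a * (S @ phi)                  # Kalman-type gain
        x_new = x + K * (z - h @ x)
        S_new = S - gamma * np.outer(S @ phi, a * phi)   # S (I - a*gamma*phi phi^T)
        return x_new, S_new

    # tiny position/velocity example with a position-only measurement
    x = np.array([0.0, 1.0])
    S = np.eye(2)
    h = np.array([1.0, 0.0])
    x, S = potter_update(x, S, h, z=0.3, r=0.1)
    print(x, S @ S.T)                      # updated state and reconstructed covariance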
Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing; Chaojie Li; Xinghuo Yu; Tingwen Huang; Xing He; Li, Chaojie; Huang, Tingwen; He, Xing; Yu, Xinghuo
2018-06-01
The resource allocation problem is studied and reformulated by a distributed interior point method via a logarithmic barrier. Facilitated by the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid the high singularity of the logarithmic barrier at the boundary, an adaptive parameter switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is verified by a numerical example and IEEE six-bus test system-based simulations, respectively.
The Mathematics of Dispatchability Revisited
NASA Technical Reports Server (NTRS)
Morris, Paul
2016-01-01
Dispatchability is an important property for the efficient execution of temporal plans where the temporal constraints are represented as a Simple Temporal Network (STN). It has been shown that every STN may be reformulated as a dispatchable STN, and dispatchability ensures that the temporal constraints need only be satisfied locally during execution. Recently it has also been shown that Simple Temporal Networks with Uncertainty, augmented with wait edges, are Dynamically Controllable provided every projection is dispatchable. Thus, the dispatchability property has both theoretical and practical interest. One thing that hampers further work in this area is the underdeveloped theory. The existing definitions are expressed in terms of algorithms, and are less suitable for mathematical proofs. In this paper, we develop a new formal theory of dispatchability in terms of execution sequences. We exploit this to prove a characterization of dispatchability involving the structural properties of the STN graph. This facilitates the potential application of the theory to uncertainty reasoning.
Synchronous parallel spatially resolved stochastic cluster dynamics
Dunn, Aaron; Dingreville, Rémi; Martínez, Enrique; ...
2016-04-23
In this work, a spatially resolved stochastic cluster dynamics (SRSCD) model for radiation damage accumulation in metals is implemented using a synchronous parallel kinetic Monte Carlo algorithm. The parallel algorithm is shown to significantly increase the size of representative volumes achievable in SRSCD simulations of radiation damage accumulation. Additionally, weak scaling performance of the method is tested in two cases: (1) an idealized case of Frenkel pair diffusion and annihilation, and (2) a characteristic example problem including defect cluster formation and growth in α-Fe. For the latter case, weak scaling is tested using both Frenkel pair and displacement cascade damage. To improve scaling of simulations with cascade damage, an explicit cascade implantation scheme is developed for cases in which fast-moving defects are created in displacement cascades. For the first time, simulation of radiation damage accumulation in nanopolycrystals can be achieved with a three dimensional rendition of the microstructure, allowing demonstration of the effect of grain size on defect accumulation in Frenkel pair-irradiated α-Fe.
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2018-03-01
Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to obtain the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, and a numerical model of the blade was created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity was simulated experimentally through the addition of small masses to the structure, intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude non-structural masses at predefined points, and updated to provide a deterministic damage prediction and information on parameter uncertainty via fuzzy updating.
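For readers unfamiliar with the firefly algorithm used in the deterministic updating step, here is a minimal sketch of that optimizer minimizing a generic model-updating residual over two stiffness parameters. The quadratic residual, parameter bounds, and algorithm constants are illustrative assumptions only; the blade model and the virus optimisation algorithm are not reproduced.

    # Sketch: a minimal firefly algorithm minimizing a stand-in model-updating
    # residual over two parameters (Young's modulus E, shear modulus G).
    import numpy as np

    def residual(p):
        # stand-in for the (measured - predicted) modal misfit
        target = np.array([70.0e9, 5.0e9])            # assumed "true" E and G
        return np.sum(((p - target) / target) ** 2)

    rng = np.random.default_rng(0)
    low, high = np.array([40e9, 2e9]), np.array([90e9, 8e9])
    n_fireflies, n_iter = 25, 100
    beta0, gamma, alpha = 1.0, 1e-2, 0.05

    X = rng.uniform(low, high, size=(n_fireflies, 2))
    f = np.array([residual(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                        # j is "brighter" (lower misfit)
                    r2 = np.sum(((X[i] - X[j]) / (high - low)) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    step = alpha * (rng.random(2) - 0.5) * (high - low)
                    X[i] = np.clip(X[i] + beta * (X[j] - X[i]) + step, low, high)
                    f[i] = residual(X[i])
    best = X[np.argmin(f)]
    print("updated E, G:", best)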
Hoffman, Donald D.; Prakash, Chetan
2014-01-01
Current models of visual perception typically assume that human vision estimates true properties of physical objects, properties that exist even if unperceived. However, recent studies of perceptual evolution, using evolutionary games and genetic algorithms, reveal that natural selection often drives true perceptions to extinction when they compete with perceptions tuned to fitness rather than truth: Perception guides adaptive behavior; it does not estimate a preexisting physical truth. Moreover, shifting from evolutionary biology to quantum physics, there is reason to disbelieve in preexisting physical truths: Certain interpretations of quantum theory deny that dynamical properties of physical objects have definite values when unobserved. In some of these interpretations the observer is fundamental, and wave functions are compendia of subjective probabilities, not preexisting elements of physical reality. These two considerations, from evolutionary biology and quantum physics, suggest that current models of object perception require fundamental reformulation. Here we begin such a reformulation, starting with a formal model of consciousness that we call a “conscious agent.” We develop the dynamics of interacting conscious agents, and study how the perception of objects and space-time can emerge from such dynamics. We show that one particular object, the quantum free particle, has a wave function that is identical in form to the harmonic functions that characterize the asymptotic dynamics of conscious agents; particles are vibrations not of strings but of interacting conscious agents. This allows us to reinterpret physical properties such as position, momentum, and energy as properties of interacting conscious agents, rather than as preexisting physical truths. We sketch how this approach might extend to the perception of relativistic quantum objects, and to classical objects of macroscopic scale. PMID:24987382
Best chirplet chain: Near-optimal detection of gravitational wave chirps
NASA Astrophysics Data System (ADS)
Chassande-Mottin, Éric; Pai, Archana
2006-02-01
The list of putative sources of gravitational waves possibly detected by the ongoing worldwide network of large scale interferometers has been continuously growing in the last years. For some of them, the detection is made difficult by the lack of a complete information about the expected signal. We concentrate on the case where the expected gravitational wave (GW) is a quasiperiodic frequency modulated signal i.e., a chirp. In this article, we address the question of detecting an a priori unknown GW chirp. We introduce a general chirp model and claim that it includes all physically realistic GW chirps. We produce a finite grid of template waveforms which samples the resulting set of possible chirps. If we follow the classical approach (used for the detection of inspiralling binary chirps, for instance), we would build a bank of quadrature matched filters comparing the data to each of the templates of this grid. The detection would then be achieved by thresholding the output, the maximum giving the individual which best fits the data. In the present case, this exhaustive search is not tractable because of the very large number of templates in the grid. We show that the exhaustive search can be reformulated (using approximations) as a pattern search in the time-frequency plane. This motivates an approximate but feasible alternative solution which is clearly linked to the optimal one. The time-frequency representation and pattern search algorithm are fully determined by the reformulation. This contrasts with the other time-frequency based methods presented in the literature for the same problem, where these choices are justified by “ad hoc” arguments. In particular, the time-frequency representation has to be unitary. Finally, we assess the performance, robustness and computational cost of the proposed method with several benchmarks using simulated data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gecow, Andrzej
On the way to simulating the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which make it difficult to define a fitness. The main statistical effects of the adaptive condition are the result of the tendency toward small change; to appear, they only need a statistically correct size of damage initiated by an evolutionary change of the system. This observation allows us to cut loops of feedbacks and, in effect, to obtain a particular statistically correct state instead of a long circular attractor, which in the quenched model is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e., statistical connections of structural parameters of the initial change with the size of the effected damage. It is a reversed-annealed method--functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are correctly depicted in comparison to the Derrida annealed approximation, which expects equilibrium levels for large networks. The algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.
Projection, introjection, and projective identification: a reformulation.
Malancharuvil, Joseph M
2004-12-01
In this essay, the author recommends a reformulation of the psychoanalytic concept of projection. The author proposes that projective processes are not merely defensive maneuvers that interfere with perception, but rather an essential means by which human perception is rendered possible. It is the manner in which human beings test and evaluate reality in terms of their experiential structure and their needs for survival and nourishment. Projection is the early phase of introjection.
NASA Astrophysics Data System (ADS)
Chen, Da-Ming; Xu, Y. F.; Zhu, W. D.
2018-05-01
An effective and reliable damage identification method for plates with a continuously scanning laser Doppler vibrometer (CSLDV) system is proposed. A new constant-speed scan algorithm is proposed to create a two-dimensional (2D) scan trajectory and automatically scan a whole plate surface. Full-field measurement of the plate can be achieved by applying the algorithm to the CSLDV system. Based on the new scan algorithm, the demodulation method is extended from one dimension for beams to two dimensions for plates to obtain a full-field operating deflection shape (ODS) of the plate from velocity response measured by the CSLDV system. The full-field ODS of an associated undamaged plate is obtained by using polynomials with proper orders to fit the corresponding full-field ODS from the demodulation method. A curvature damage index (CDI) using differences between curvatures of ODSs (CODSs) associated with ODSs that are obtained by the demodulation method and the polynomial fit is proposed to identify damage. An auxiliary CDI obtained by averaging CDIs at different excitation frequencies is defined to further assist damage identification. An experiment of an aluminum plate with damage in the form of 10.5% thickness reduction in a damage area of 0.86% of the whole scan area is conducted to investigate the proposed method. Six frequencies close to natural frequencies of the plate and one randomly selected frequency are used as sinusoidal excitation frequencies. Two 2D scan trajectories, i.e., a horizontally moving 2D scan trajectory and a vertically moving 2D scan trajectory, are used to obtain ODSs, CODSs, and CDIs of the plate. The damage is successfully identified near areas with consistently high values of CDIs at different excitation frequencies along the two 2D scan trajectories; the damage area is also identified by auxiliary CDIs.
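To illustrate the curvature damage index (CDI) idea described above, the following sketch works on a single scan line: it computes the curvature of a measured operating deflection shape (ODS) by central differences, takes a smooth polynomial fit as a stand-in for the undamaged baseline, and forms the CDI as the squared curvature difference. The grid, damage position, and 6th-order fit are illustrative assumptions, not the CSLDV experiment parameters.

    # Sketch: curvature damage index along one scan line.
    import numpy as np

    x = np.linspace(0.0, 1.0, 201)
    ods = np.sin(np.pi * x)                           # pretend measured ODS
    ods[95:105] += 2e-3 * np.hanning(10)              # small local distortion = damage

    def curvature(y, dx):
        # second derivative by central differences (endpoints dropped)
        return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx ** 2

    fit = np.polyval(np.polyfit(x, ods, 6), x)        # smooth "undamaged" baseline
    dx = x[1] - x[0]
    cdi = (curvature(ods, dx) - curvature(fit, dx)) ** 2
    print("peak CDI near x =", x[1:-1][np.argmax(cdi)])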
A self-taught artificial agent for multi-physics computational model personalization.
Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin
2016-12-01
Personalization is the process of fitting a model to patient data, a critical step towards application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting few hyper-parameters of the agent and defining the observations to match. The full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Then Vito was applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and with faster (up to seven times) convergence rate. Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model. Copyright © 2016. Published by Elsevier B.V.
Imaging strategy for infants with urinary tract infection: a new algorithm.
Preda, Iulian; Jodal, Ulf; Sixt, Rune; Stokland, Eira; Hansson, Sverker
2011-03-01
We analyzed clinical data for prediction of permanent renal damage in infants with first time urinary tract infection. This population based, prospective, 3-year study included 161 male and 129 female consecutive infants with first time urinary tract infection. Ultrasonography and dimercapto-succinic acid scintigraphy were performed as acute investigations and voiding cystourethrography within 2 months. Late scintigraphy was performed after 1 year in infants with abnormality on the first dimercapto-succinic acid scan or recurrent febrile urinary tract infections. End point was renal damage on the late scan. A total of 270 patients had end point data available, of whom 70 had renal damage and 200 did not. Final kidney status was associated with C-reactive protein, serum creatinine, temperature, leukocyturia, non-Escherichia coli bacteria, anteroposterior diameter on ultrasound and recurrent febrile urinary tract infections. In stepwise multiple regression analysis C-reactive protein, creatinine, leukocyturia, anteroposterior diameter and non-E.coli bacteria were independent predictors of permanent renal damage. C-reactive protein 70 mg/l or greater combined with anteroposterior diameter 10 mm or greater had sensitivity of 87% and specificity of 59% for renal damage. An algorithm for imaging of infants with first time urinary tract infection based on these results would have eliminated 126 acute dimercapto-succinic acid scans compared to our study protocol, while missing 9 patients with permanent renal damage. C-reactive protein can be used as a predictor of permanent renal damage in infants with urinary tract infection and together with anteroposterior diameter serves as a basis for an imaging algorithm. Copyright © 2011 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Nonlinear damage identification of breathing cracks in Truss system
NASA Astrophysics Data System (ADS)
Zhao, Jie; DeSmidt, Hans
2014-03-01
The breathing cracks in a truss system are detected by a Frequency Response Function (FRF) based damage identification method. This method utilizes damage-induced changes of frequency response functions to estimate the severity and location of structural damage. This approach enables the possibility of arbitrary interrogation frequencies and multiple inputs/outputs, which greatly enrich the dataset for damage identification. The dynamical model of the truss system is built using the finite element method and the crack model is based on fracture mechanics. Since the crack is driven by the tensional and compressive forces of the truss member, only one damage parameter is needed to represent the stiffness reduction of each truss member. Assuming that the crack constantly breathes with the exciting frequency, the linear damage detection algorithm is developed in the frequency/time domain using Least Square and Newton-Raphson methods. Then, the dynamic response of the truss system with breathing cracks is simulated in the time domain, and meanwhile the crack breathing status for each member is determined by feedback from the real-time displacements of the member's nodes. Harmonic Fourier Coefficients (HFCs) of the dynamical response are computed by processing the data through convolution and moving average filters. Finally, the results show the effectiveness of the linear damage detection algorithm in identifying the nonlinear breathing cracks using different combinations of HFCs and sensors.
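A small sketch of the HFC extraction step described above: a steady-state response is demodulated with sines and cosines at multiples of the excitation frequency and smoothed with a moving-average (convolution) filter to estimate the harmonic amplitudes. The synthetic signal and window length are illustrative assumptions; the truss model and identification algorithm are not reproduced.

    # Sketch: harmonic Fourier coefficients (HFCs) via demodulation + moving average.
    import numpy as np

    fs, f_exc, T = 2000.0, 10.0, 2.0
    t = np.arange(0.0, T, 1.0 / fs)
    # response of a "breathing" nonlinearity: fundamental plus a higher harmonic
    resp = 1.0 * np.sin(2 * np.pi * f_exc * t) + 0.2 * np.sin(2 * np.pi * 2 * f_exc * t)

    def moving_average(x, n):
        return np.convolve(x, np.ones(n) / n, mode="valid")

    def hfc(signal, k):
        """Amplitude of the k-th harmonic of the excitation frequency."""
        window = int(fs / f_exc)                       # one excitation period
        c = moving_average(signal * np.cos(2 * np.pi * k * f_exc * t), window)
        s = moving_average(signal * np.sin(2 * np.pi * k * f_exc * t), window)
        return 2.0 * np.hypot(c.mean(), s.mean())

    for k in (1, 2, 3):
        print(f"harmonic {k}: amplitude ~ {hfc(resp, k):.3f}")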
NASA Astrophysics Data System (ADS)
Krishnan, M.; Bhowmik, B.; Hazra, B.; Pakrashi, V.
2018-02-01
In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using Recursive Principal Component Analysis (RPCA) in conjunction with Time Varying Auto-Regressive (TVAR) modeling is proposed. In this method, the acceleration data is used to obtain recursive proper orthogonal components online using the rank-one perturbation method, followed by TVAR modeling of the first transformed response, to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/non-linear states that indicate damage. Most of the works available in the literature deal with algorithms that require windowing of the gathered data owing to their data-driven nature, which renders them ineffective for online implementation. Algorithms focussed on mathematically consistent recursive techniques in a rigorous theoretical framework of structural damage detection are missing, which motivates the development of the present framework that is amenable to online implementation and can be utilized alongside a suite of experimental and numerical investigations. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for the sample covariance matrices and the new data point at each successive time instant, using the rank-one perturbation method. TVAR modeling on the principal component explaining maximum variance is utilized and the damage is identified by tracking the TVAR coefficients. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data, without requiring any baseline data. Numerical simulations performed on a 5-dof nonlinear system under white noise excitation and El Centro (1940 Imperial Valley earthquake) excitation, for different damage scenarios, demonstrate the robustness of the proposed algorithm. The method is further validated on results obtained from case studies involving experiments performed on a cantilever beam subjected to earthquake excitation, a two-storey bench-scale model with a tuned mass damper (TMD), and data from recorded responses of the UCLA Factor building, which demonstrate the efficacy of the proposed methodology as an ideal candidate for real-time, reference-free structural health monitoring.
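The rank-one-perturbation RPCA stage is beyond a short sketch, but the coefficient-tracking idea can be illustrated: below, a time-varying AR(2) signal is tracked with exponentially weighted recursive least squares, so a shift in the coefficients at mid-record (standing in for "damage") shows up in the estimates. The AR(2) signal, forgetting factor, and coefficient change are illustrative assumptions and not the paper's algorithm or data.

    # Sketch: tracking time-varying AR (TVAR) coefficients with recursive least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 4000
    y = np.zeros(N)
    for n in range(2, N):
        a1, a2 = (1.6, -0.8) if n < N // 2 else (1.4, -0.7)   # change = "damage"
        y[n] = a1 * y[n - 1] + a2 * y[n - 2] + 0.1 * rng.standard_normal()

    lam = 0.995                                # forgetting factor
    theta = np.zeros(2)                        # AR coefficient estimates
    P = 1e3 * np.eye(2)
    history = []
    for n in range(2, N):
        phi = np.array([y[n - 1], y[n - 2]])
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * (y[n] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
        history.append(theta.copy())
    history = np.array(history)
    print("coefficients before change:", history[N // 2 - 10])
    print("coefficients after change :", history[-10])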
Morozoff, Edmund P; Smyth, John A
2009-01-01
Neonates with underdeveloped lungs often require oxygen therapy. During the course of oxygen therapy, elevated levels of blood oxygenation (hyperoxemia) must be avoided or the risk of chronic lung disease or retinal damage is increased. Low levels of blood oxygen (hypoxemia) may lead to permanent brain tissue damage and, in some cases, mortality. A closed-loop controller that automatically administers oxygen therapy using 3 algorithms - state machine, adaptive model, and proportional integral derivative (PID) - is applied to 7 ventilated low birth weight neonates and compared to manual oxygen therapy. All 3 automatic control algorithms demonstrated their ability to improve manual oxygen therapy by increasing periods of normoxemia and reducing the need for manual FiO(2) adjustments. Of the three control algorithms, the adaptive model showed the best performance, with 0.25 manual adjustments per hour and 73% of time spent within the target +/- 3% SpO(2).
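To show what the PID variant of such a closed-loop controller looks like in its simplest form, the sketch below runs a discrete PID loop adjusting FiO2 toward an SpO2 target against a toy first-order "patient" response. The gains, limits, and response model are illustrative assumptions and are not the controllers or patient dynamics evaluated in the study.

    # Sketch: discrete PID loop nudging FiO2 toward an SpO2 setpoint.
    import numpy as np

    def simulate_pid(setpoint=0.92, kp=0.8, ki=0.05, kd=0.1, dt=1.0, steps=300):
        spo2, fio2 = 0.85, 0.30
        integral, prev_err = 0.0, 0.0
        for _ in range(steps):
            err = setpoint - spo2
            integral += err * dt
            deriv = (err - prev_err) / dt
            prev_err = err
            fio2 = np.clip(0.21 + kp * err + ki * integral + kd * deriv, 0.21, 1.0)
            # toy first-order patient response: SpO2 drifts toward a level set by FiO2
            spo2 += dt / 30.0 * (0.70 + 0.35 * fio2 - spo2)
        return spo2, fio2

    spo2, fio2 = simulate_pid()
    print(f"final SpO2 ~ {spo2:.3f} at FiO2 = {fio2:.2f}")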
NASA Astrophysics Data System (ADS)
Chitchian, Shahab; Vincent, Kathleen L.; Vargas, Gracie; Motamedi, Massoud
2012-11-01
We have explored the use of optical coherence tomography (OCT) as a noninvasive tool for assessing the toxicity of topical microbicides, products used to prevent HIV, by monitoring the integrity of the vaginal epithelium. A novel feature-based segmentation algorithm using a nearest-neighbor classifier was developed to monitor changes in the morphology of vaginal epithelium. The two-step automated algorithm yielded OCT images with a clearly defined epithelial layer, enabling differentiation of normal and damaged tissue. The algorithm was robust in that it was able to discriminate the epithelial layer from underlying stroma as well as residual microbicide product on the surface. This segmentation technique for OCT images has the potential to be readily adaptable to the clinical setting for noninvasively defining the boundaries of the epithelium, enabling quantifiable assessment of microbicide-induced damage in vaginal tissue.
Shu, Jing-Xian; Li, Ying; He, Ting; Chen, Ling; Li, Xue; Zou, Lin-Lin; Yin, Lu; Li, Xiao-Hui; Wang, An-Li; Liu, Xing; Yuan, Hong
2018-01-07
BACKGROUND The explosive increase in medical literature has changed therapeutic strategies, but it is challenging for physicians to keep up-to-date with the literature. Large-scale data mining of the scientific literature can be used to refresh physician knowledge and improve the quality of disease treatment. MATERIAL AND METHODS This paper reports on a reformulated version of a data mining method called MedRank, a network-based algorithm that ranks therapies for a target disease based on the MEDLINE literature database. The MedRank algorithm input for this study was a clear definition of the disease model; the algorithm output was the accurate recommendation of antihypertensive drugs. Hypertension with diabetes mellitus was chosen as the input disease model. The ranking outputs of antihypertensive drugs were based on the Joint National Committee (JNC) guidelines, one through eight, and the publication dates, ≤1977, ≤1980, ≤1984, ≤1988, ≤1993, ≤1997, ≤2003, and ≤2013. McNemar's test was used to evaluate the efficacy of MedRank against specific JNC guidelines. RESULTS The ranking order of antihypertensive drugs changed with the date of the published literature, and the MedRank algorithm drug recommendations had excellent consistency with the JNC guidelines in 2013 (P=1.00 from McNemar's test, Kappa=0.78, P=1.00). Moreover, the Kappa index increased over time. Sensitivity was better than specificity for MedRank; in addition, sensitivity was maintained at a high level, and specificity increased from 1997 to 2013. CONCLUSIONS The use of MedRank to rank the medical literature on hypertension with diabetes mellitus in our study suggests possible application in clinical practice; it is a potential method for supporting antihypertensive drug-prescription decisions.
Filipovic, M; Lukic, M; Djordjevic, S; Krstonosic, V; Pantelic, I; Vuleta, G; Savic, S
2017-10-01
Consumers' demand for improved product performance, alongside the obligation to meet safety and efficacy goals, presents a key reason for reformulation, as well as a challenging task for formulators. Any change to the formulation, whether it is wanted - in order to innovate the product (new actives and raw materials) - or necessary - due to, for example, legislative changes (restriction of ingredients), ingredient market unavailability, or new manufacturing equipment - may have a number of consequences, desired or otherwise. The aim of the study was to evaluate the influence of multiple factors - variations of the composition, manufacturing conditions and their interactions - on emulsion textural and rheological characteristics, applying a general experimental factorial design and, subsequently, to establish an approach that could replace, to some extent, certain expensive and time-consuming tests (e.g. certain sensory analyses) often required, partly or completely, after reformulation. An experimental design strategy was utilized to reveal the influence of reformulation factors (addition of new actives, preparation method change) on the textural and rheological properties of cosmetic emulsions, especially those linked to certain sensorial attributes, and on droplet size. The general experimental factorial design revealed a significant direct effect of each factor, as well as their interaction effects, on certain characteristics of the system and provided valuable information necessary for fine-tuning reformulation conditions. Upon addition of STEM-liposomes, consistency, index of viscosity, firmness and cohesiveness were decreased, along with certain rheology parameters (elastic and viscous modulus), whereas maximal and minimal apparent viscosities and droplet size were increased. The presence of an emollient (squalene) affected all the investigated parameters in a concentration-dependent manner. Modification of the preparation method (using an Ultra Turrax instead of a propeller stirrer) produced emulsions with higher firmness and maximal apparent viscosity, but led to a decrease in minimal apparent viscosity, hysteresis loop area, all monitored parameters of oscillatory rheology, and droplet size. The study showed that the established approach, which combines a general experimental design with instrumental rheological and textural measurements, could be an appropriate, more objective, repeatable and time- and money-saving step towards developing cosmetic emulsions with satisfying (improved or unchanged) consumer-acceptable performance during reformulation. © 2017 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Selection of experimental modal data sets for damage detection via model update
NASA Technical Reports Server (NTRS)
Doebling, S. W.; Hemez, F. M.; Barlow, M. S.; Peterson, L. D.; Farhat, C.
1993-01-01
When using a finite element model update algorithm for detecting damage in structures, it is important that the experimental modal data sets used in the update be selected in a coherent manner. In the case of a structure with extremely localized modal behavior, it is necessary to use both low and high frequency modes, but many of the modes in between may be excluded. In this paper, we examine two different mode selection strategies based on modal strain energy, and compare their success to the choice of an equal number of modes based merely on lowest frequency. Additionally, some parameters are introduced to enable a quantitative assessment of the success of our damage detection algorithm when using the various set selection criteria.
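As a rough illustration of modal-strain-energy-based mode selection, the sketch below ranks the modes of a simple spring-mass chain by the strain energy they carry in a region of interest, contrasted with simply taking the lowest-frequency modes. The chain model and the chosen region are illustrative stand-ins for the test structure, not the models or criteria used in the paper.

    # Sketch: rank candidate modes by the modal strain energy carried in a
    # region of interest of a spring-mass chain.
    import numpy as np

    n, k, m = 10, 1.0e4, 1.0
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = 2 * k
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -k

    # with M = m*I the generalized eigenproblem reduces to a standard symmetric one
    w2, phi = np.linalg.eigh(K / m)

    def element_strain_energy(mode, element):
        """Strain energy of one spring element for a given mode shape."""
        lo = 0.0 if element == 0 else mode[element - 1]
        return 0.5 * k * (mode[element] - lo) ** 2

    region = [2, 3]                       # elements where damage is suspected
    mse_region = np.array([sum(element_strain_energy(phi[:, j], e) for e in region)
                           for j in range(n)])
    fraction = mse_region / (0.5 * w2)    # share of each mode's total strain energy
    print("modes ranked by strain energy in region:", np.argsort(fraction)[::-1][:4])
    print("lowest-frequency modes                 :", np.argsort(w2)[:4])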
Identification of Surface and Near Surface Defects and Damage Evaluation by Laser Speckle Techniques
NASA Technical Reports Server (NTRS)
Gowda, Chandrakanth H.
2001-01-01
As a part of the grant activity, a laboratory was established within the Department of Electrical Engineering for the study and measurement of surface defects and damage evaluation. This facility has been utilized for implementing several algorithms for accurate measurement of defects. Experiments were conducted using simulated images, and multiple images were fused to achieve accurate measurements. During the nine months of the grant after the principal investigator role was transferred to me, experiments were conducted using simulated synthetic aperture radar (SAR) images. This proved useful when several algorithms were applied to images of smooth objects with minor abnormalities. Given the time constraint, the derived algorithms could not be applied to actual images of smooth objects with minor abnormalities.
Jaenke, Rachael; Barzi, Federica; McMahon, Emma; Webster, Jacqui; Brimblecombe, Julie
2017-11-02
Food product reformulation is promoted as an effective strategy to reduce population salt intake and address the associated burden of chronic disease. Salt has a number of functions in food processing, including impacting upon physical and sensory properties. Manufacturers must ensure that reformulation of foods to reduce salt does not compromise consumer acceptability. The aim of this systematic review is to determine to what extent foods can be reduced in salt without detrimental effect on consumer acceptability. Fifty studies reported on salt reduction, replacement or compensation in processed meats, breads, cheeses, soups, and miscellaneous products. For each product category, levels of salt reduction were collapsed into four groups: <40%, 40-59%, 60-79% and ≥80%. Random effects meta-analyses conducted on salt-reduced products showed that salt could be reduced by approximately 40% in breads [mean change in acceptability for reduction <40% (-0.27, 95% confidence interval (CI) -0.62, 0.08; p = 0.13)] and approximately 70% in processed meats [mean change in acceptability for reductions 60-69% (-0.18, 95% CI -0.44, 0.07; p = 0.15)] without significantly impacting consumer acceptability. Results varied for other products. These results will support manufacturers to make greater reductions in salt when reformulating food products, which in turn will contribute to a healthier food supply.
Peridynamics for failure and residual strength prediction of fiber-reinforced composites
NASA Astrophysics Data System (ADS)
Colavito, Kyle
Peridynamics is a reformulation of classical continuum mechanics that utilizes integral equations in place of partial differential equations to remove the difficulty in handling discontinuities, such as cracks or interfaces, within a body. Damage is included within the constitutive model; initiation and propagation can occur without resorting to special crack growth criteria necessary in other commonly utilized approaches. Predicting damage and residual strengths of composite materials involves capturing complex, distinct and progressive failure modes. The peridynamic laminate theory correctly predicts the load redistribution in general laminate layups in the presence of complex failure modes through the use of multiple interaction types. This study presents two approaches to obtain the critical peridynamic failure parameters necessary to capture the residual strength of a composite structure. The validity of both approaches is first demonstrated by considering the residual strength of isotropic materials. The peridynamic theory is used to predict the crack growth and final failure load in both a diagonally loaded square plate with a center crack, as well as a four-point shear specimen subjected to asymmetric loading. This study also establishes the validity of each approach by considering composite laminate specimens in which each failure mode is isolated. Finally, the failure loads and final failure modes are predicted in a laminate with various hole diameters subjected to tensile and compressive loads.
Detection of multiple damages employing best achievable eigenvectors under Bayesian inference
NASA Astrophysics Data System (ADS)
Prajapat, Kanta; Ray-Chaudhuri, Samit
2018-05-01
A novel approach is presented in this work to simultaneously localize multiple damaged elements in a structure along with estimating the damage severity for each of the damaged elements. For detection of damaged elements, a best achievable eigenvector based formulation has been derived. To deal with noisy data, Bayesian inference is employed in the formulation, wherein the likelihood of the Bayesian algorithm is formed on the basis of errors between the best achievable eigenvectors and the measured modes. In this approach, the most probable damage locations are evaluated under Bayesian inference by generating combinations of various possible damaged elements. Once damage locations are identified, damage severities are estimated using a Bayesian inference Markov chain Monte Carlo simulation. The efficiency of the proposed approach has been demonstrated by carrying out a numerical study involving a 12-story shear building. It has been found from this study that damage scenarios involving as little as 10% loss of stiffness in multiple elements are accurately determined (localized and severities quantified) even when 2% noise-contaminated modal data are utilized. Further, this study introduces a term, parameter impact (evaluated based on the sensitivity of modal parameters to structural parameters), to decide the suitability of selecting a particular mode if some idea about the damaged elements is available. It has been demonstrated here that the accuracy and efficiency of the Bayesian quantification algorithm increase if damage localization is carried out a priori. An experimental study involving a laboratory-scale shear building and different stiffness modification scenarios shows that the proposed approach is efficient enough to localize the stories with stiffness modification.
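The best-achievable-eigenvector localization step is not reproduced here, but the severity-quantification stage can be sketched: a random-walk Metropolis sampler estimates the stiffness-retention factor of one already-localized story of a small shear model from noisy natural frequencies. The 3-DOF model, noise level, prior, and proposal width are illustrative assumptions rather than the paper's 12-story formulation.

    # Sketch: Markov chain Monte Carlo (random-walk Metropolis) severity estimation
    # for one already-localized story of a 3-DOF shear model.
    import numpy as np

    k0 = np.array([2.0e3, 2.0e3, 2.0e3])             # pristine story stiffnesses
    def stiffness(k):
        K = np.zeros((3, 3))
        for i in range(3):
            K[i, i] = k[i] + (k[i + 1] if i < 2 else 0.0)
            if i < 2:
                K[i, i + 1] = K[i + 1, i] = -k[i + 1]
        return K

    def freqs(alpha):
        """Natural frequencies when story 2 keeps a fraction alpha of its stiffness."""
        k = k0.copy()
        k[1] *= alpha
        return np.sqrt(np.linalg.eigvalsh(stiffness(k)))   # unit floor masses

    true_alpha, sigma = 0.85, 0.02                   # 15% loss, relative noise level
    rng = np.random.default_rng(0)
    measured = freqs(true_alpha) * (1 + sigma * rng.standard_normal(3))

    def log_post(alpha):
        if not 0.3 < alpha <= 1.0:                   # uniform prior on (0.3, 1]
            return -np.inf
        r = (measured - freqs(alpha)) / (sigma * measured)
        return -0.5 * np.sum(r ** 2)

    samples, alpha = [], 0.9
    lp = log_post(alpha)
    for _ in range(5000):
        prop = alpha + 0.02 * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            alpha, lp = prop, lp_prop
        samples.append(alpha)
    post = np.array(samples[1000:])                  # discard burn-in
    print(f"posterior mean stiffness loss: {1 - post.mean():.2%}")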
Gressier, Mathilde; Privet, Lisa; Mathias, Kevin Clark; Vlassopoulos, Antonis; Vieux, Florent; Masset, Gabriel
2017-07-01
Background: Food reformulation has been identified as a strategy to improve nutritional intakes; however, little is known about the potential impact of industry-wide reformulations. Objective: The aim of the study was to model the dietary impact of food and beverage reformulation following the Nestlé Nutritional Profiling System (NNPS) standards for children, adolescents, and adults in the United States and France. Design: Dietary intakes of individuals aged ≥4 y were retrieved from nationally representative surveys: the US NHANES 2011-2012 ( n = 7456) and the French Individual and National Survey on Food Consumption ( n = 3330). The composition of all foods and beverages consumed were compared with the NNPS standards for energy, total and saturated fats, sodium, added sugars, protein, fiber, and calcium. Two scenarios were modeled. In the first, the nutrient content of foods and beverages was adjusted to the NNPS standards if they were not met. In the second, products not meeting the standards were replaced by the most nutritionally similar alternative meeting the standards from the same category. Dietary intakes were assessed against local nutrient recommendations, and analyses were stratified by body mass index and socioeconomic status. Results: Scenarios 1 and 2 showed reductions in US adults' mean daily energy (-88 and -225 kcal, respectively), saturated fats (-4.2, -6.9 g), sodium (-406, -324 mg), and added sugars (-29.4, -35.8 g). Similar trends were observed for US youth and in France. The effects on fiber and calcium were limited. In the United States, the social gradient of added sugars intake was attenuated in both scenarios compared with the baseline values. Conclusions: Potential industry-wide reformulation of the food supply could lead to higher compliance with recommendations in both the United States and France, and across all socioeconomic groups. NNPS standards seemed to be especially effective for nutrients consumed in excess. © 2017 American Society for Nutrition.
Rippin, H L; Hutchinson, J; Ocke, M; Jewell, J; Breda, J J; Cade, J E
2017-01-01
Trans fatty acids (TFA) increase the risk of mortality and chronic diseases. TFA intakes have fallen since reformulation, but may still be high in certain, vulnerable, groups. This paper investigates the socio-economic and food consumption characteristics of high TFA consumers after voluntary reformulation in the Netherlands and the UK. Post-reformulation data on adults aged 19-64 were analysed in two national surveys: the Dutch National Food Consumption Survey (DNFCS), collected 2007-2010 using 2*24hr recalls (N = 1933), and the UK National Diet and Nutrition Survey (NDNS) years 3&4, collected 2010/11 and 2011/12 using 4-day food diaries (N = 848). The socio-economic and food consumption characteristics of the top 10% and remaining 90% of TFA consumers were compared. Means of continuous data were compared using t-tests and categorical data using chi-squared tests. Multivariate logistic regression models indicated which socio-demographic variables were associated with high TFA consumption. In the Dutch analyses, women and those born outside the Netherlands were more likely to be top 10% TFA consumers than men and Dutch-born participants. In the UK unadjusted analyses there was no significant trend in socio-economic characteristics between high and lower TFA consumers, but there were regional differences in the multivariate logistic regression analyses. In the Netherlands, high TFA consumers were more likely than the remaining 90% to be consumers of cakes, buns & pastries; cream; and fried potato. In the UK, high TFA consumers were more likely to be consumers of lamb; cheese; and dairy desserts, and less likely to be consumers of crisps and savoury snacks. Some socio-demographic differences between high and lower TFA consumers were evident post-reformulation. High TFA consumers in the Dutch 2007-10 survey appeared more likely to obtain TFA from artificial sources than those in the UK survey. Further analyses using more up-to-date food composition databases may be needed.
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms. PMID:23226239
Fractional Programming for Communication Systems—Part I: Power Control and Beamforming
NASA Astrophysics Data System (ADS)
Shen, Kaiming; Yu, Wei
2018-05-01
This two-part paper explores the use of FP in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem--in contrast to conventional FP techniques that mostly can only deal with the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as the fixed-point iteration and the weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
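For concreteness, here is a compact statement of the quadratic transform for a sum-of-ratios problem as it is commonly presented; the notation is illustrative and matches the abstract's description rather than reproducing the paper's exact formulation. For $A_m(x) \ge 0$ and $B_m(x) > 0$,

    \[
    \max_{x}\ \sum_{m} \frac{A_m(x)}{B_m(x)}
    \quad\text{is recast as}\quad
    \max_{x,\,y}\ \sum_{m}\Bigl(2\,y_m \sqrt{A_m(x)} - y_m^{2}\,B_m(x)\Bigr),
    \qquad
    y_m^{\star} = \frac{\sqrt{A_m(x)}}{B_m(x)} .
    \]

For fixed $x$, maximizing over $y_m$ recovers $A_m(x)/B_m(x)$; for fixed $y$, the objective is concave in $x$ whenever each $A_m$ is concave and each $B_m$ is convex. Alternating between the closed-form $y$ update and the now-convex $x$ subproblem gives the kind of convergent iterative algorithm the abstract refers to.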
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petra, C.; Gavrea, B.; Anitescu, M.
2009-01-01
The present work aims at comparing the performance of several quadratic programming (QP) solvers for simulating large-scale frictional rigid-body systems. Traditional time-stepping schemes for simulation of multibody systems are formulated as linear complementarity problems (LCPs) with copositive matrices. Such LCPs are generally solved by means of Lemke-type algorithms, and solvers such as the PATH solver have proved to be robust. However, for large systems, the PATH solver or any other pivotal algorithm becomes impractical from a computational point of view. The convex relaxation proposed by one of the authors allows the formulation of the integration step as a QP, for which a wide variety of state-of-the-art solvers are available. In what follows we report the results obtained solving that subproblem when using the QP solvers MOSEK, OOQP, TRON, and BLMVM. OOQP is presented with both the symmetric indefinite solver MA27 and our Cholesky reformulation using the CHOLMOD package. We investigate computational performance and address the correctness of the results from a modeling point of view. We conclude that the OOQP solver, particularly with the CHOLMOD linear algebra solver, has predictable performance and memory use patterns and is far more competitive for these problems than are the other solvers.
A monolithic mass tracking formulation for bubbles in incompressible flow
NASA Astrophysics Data System (ADS)
Aanjaneya, Mridul; Patkar, Saket; Fedkiw, Ronald
2013-08-01
We devise a novel method for treating bubbles in incompressible flow that relies on the conservative advection of bubble mass and an associated equation of state in order to determine pressure boundary conditions inside each bubble. We show that executing this algorithm in a traditional manner leads to stability issues similar to those seen for partitioned methods for solid-fluid coupling. Therefore, we reformulate the problem monolithically. This is accomplished by first proposing a new fully monolithic approach to coupling incompressible flow to fully nonlinear compressible flow including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow removing not only the nonlinearities but also the spatial variations of both the density and the pressure. The resulting algorithm is quite robust, has been shown to converge to known solutions for test problems, and has been shown to be quite effective on more realistic problems including those with multiple bubbles, merging and pinching, etc. Notably, this approach departs from a standard two-phase incompressible flow model where the air flow preserves its volume despite potentially large forces and pressure differentials in the surrounding incompressible fluid that should change its volume. Our bubbles readily change volume according to an isothermal equation of state.
Intelligent structural health monitoring and damage detection for light-rail bridges
DOT National Transportation Integrated Search
1998-05-01
A global damage detection algorithm for bridge-like Structures is proposed. This method provides the capability of determining the reduction in both stiffness and damping parameters of the structural elements. It is assumed the mass of the structural...
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Liu, Lie; Liu, S. T.; Yuan, Fuh-Gwo; Beard, Shawn
2011-04-01
Materials State Awareness (MSA) goes beyond traditional NDE and SHM in its challenge to characterize the current state of material damage before the onset of macro-damage such as cracks. A highly reliable, minimally invasive system for MSA of aerospace structures, naval structures, and next-generation space systems is critically needed. Development of such a system will require a reliable SHM system that can detect the onset of damage well before the flaw grows to a critical size. Therefore, it is important to develop an integrated SHM system that not only detects macroscale damage in the structures but also provides an early indication of flaw precursors and microdamage. The early warning for flaw precursors and their evolution provided by an SHM system can then be used to define remedial strategies before the structural damage leads to failure, and significantly improve the safety and reliability of the structures. Thus, in this article a preliminary concept of developing the Hybrid Distributed Sensor Network Integrated with Self-learning Symbiotic Diagnostic Algorithms and Models to accurately and reliably detect the precursors to damage that occurs in the structure is discussed. Experiments conducted in a laboratory environment show the potential of the proposed technique.
An improved Four-Russians method and sparsified Four-Russians algorithm for RNA folding.
Frid, Yelena; Gusfield, Dan
2016-01-01
The basic RNA secondary structure prediction problem or single sequence folding problem (SSF) was solved 35 years ago by a now well-known [Formula: see text]-time dynamic programming method. Recently, three methodologies (Valiant, Four-Russians, and Sparsification) have been applied to speed up RNA secondary structure prediction. The sparsification method exploits two properties of the input: the number of subsequences Z with endpoints belonging to the optimal folding set, and the maximum number of base pairs L. These sparsity properties satisfy [Formula: see text] and [Formula: see text], and the method reduces the algorithmic running time to O(LZ). The Four-Russians method instead gains its speedup by tabling partial results. In this paper, we explore three different algorithmic speedups. We first expand and reformulate the single sequence folding Four-Russians [Formula: see text]-time algorithm to utilize an on-demand lookup table. Second, we create a framework that combines the fastest Sparsification and new fastest on-demand Four-Russians methods. This combined method has worst-case running time of [Formula: see text], where [Formula: see text] and [Formula: see text]. Third, we update the Four-Russians formulation to achieve an on-demand [Formula: see text]-time parallel algorithm. This then leads to an asymptotic speedup of [Formula: see text], where [Formula: see text] and [Formula: see text] is the number of subsequences with endpoint j belonging to the optimal folding set. The on-demand formulation not only removes all extraneous computation and allows us to incorporate more realistic scoring schemes, but also leads us to take advantage of the sparsity properties. Through asymptotic analysis and empirical testing on the base-pair maximization variant and a more biologically informative scoring scheme, we show that this Sparse Four-Russians framework is able to achieve a speedup on every problem instance that is asymptotically never worse, and empirically better, than that achieved by the minimum of the two methods alone.
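For orientation, the sketch below implements the classical O(n^3) dynamic program for the base-pair maximization variant that the Valiant, Four-Russians, and Sparsification speedups accelerate; the pairing rules and minimum loop length are simplifying assumptions.

```python
def nussinov_max_pairs(seq, min_loop=3):
    """Baseline O(n^3) dynamic program for the base-pair maximisation variant
    of single-sequence folding (the recurrence the speedups above accelerate)."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # base j left unpaired
            for k in range(i, j - min_loop):         # base j paired with base k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov_max_pairs("GGGAAAUCC"))   # 3 nested pairs for this toy sequence
```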
Linear Time Algorithms to Restrict Insider Access using Multi-Policy Access Control Systems
Mell, Peter; Shook, James; Harang, Richard; Gavrila, Serban
2017-01-01
An important way to keep malicious insiders from distributing sensitive information is to limit their access to information as tightly as possible. This has always been the goal of access control mechanisms, but individual approaches have been shown to be inadequate. Ensemble approaches of multiple methods instantiated simultaneously have been shown to more tightly restrict access, but approaches to do so have had limited scalability (resulting in exponential calculations in some cases). In this work, we take the Next Generation Access Control (NGAC) approach standardized by the American National Standards Institute (ANSI) and demonstrate its scalability. The existing publicly available reference implementations all use cubic algorithms and thus NGAC was widely viewed as not scalable. The primary NGAC reference implementation took, for example, several minutes to simply display the set of files accessible to a user on a moderately sized system. In our approach, we take these cubic algorithms and make them linear. We do this by reformulating the set theoretic approach of the NGAC standard into a graph theoretic approach and then apply standard graph algorithms. We thus can answer important access control decision questions (e.g., which files are available to a user and which users can access a file) using linear time graph algorithms. We also provide a default linear time mechanism to visualize and review user access rights for an ensemble of access control mechanisms. Our visualization appears to be a simple file directory hierarchy but in reality is an automatically generated structure abstracted from the underlying access control graph that works with any set of simultaneously instantiated access control policies. It also provides an implicit mechanism for symbolic linking that provides a powerful access capability. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach. This may help transition from concept to reality the idea of using ensembles of simultaneously instantiated access control methodologies, thereby limiting insider threat. PMID:28758045
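A toy sketch of the graph-theoretic reformulation: policy assignments become edges, and the access-review question "which files can this user reach?" becomes a linear-time breadth-first traversal. The graph below is a drastically simplified stand-in and does not capture the full NGAC semantics.

```python
from collections import deque

# Toy policy graph: user -> user attributes -> object attributes -> objects.
# Edges stand in for NGAC-style assignments/associations in simplified form.
policy_graph = {
    "alice":          ["engineering"],
    "engineering":    ["project-x-docs"],
    "project-x-docs": ["design.pdf", "budget.xlsx"],
    "design.pdf":     [],
    "budget.xlsx":    [],
}

def reachable_objects(graph, user):
    """O(nodes + edges) breadth-first traversal answering the access-review
    question 'which files can this user reach?'."""
    seen, queue = {user}, deque([user])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    # Leaves of the toy graph stand in for objects (files).
    return sorted(n for n in seen if not graph.get(n) and n != user)

print(reachable_objects(policy_graph, "alice"))   # ['budget.xlsx', 'design.pdf']
```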
Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach
NASA Technical Reports Server (NTRS)
Warner, James E.; Hochhalter, Jacob D.
2016-01-01
This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
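A minimal random-walk Metropolis sketch of the Bayesian damage-parameter estimation described above; the one-parameter surrogate, Gaussian likelihood, and flat prior are illustrative stand-ins for the paper's sparse-grid surrogate, weighted likelihood, and DRAM sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_strain(damage_size):
    """Stand-in for a cheap surrogate of the finite element model:
    maps a damage parameter to a predicted sensor strain."""
    return 1.0 + 0.5 * damage_size

true_size = 2.0
data = surrogate_strain(true_size) + rng.normal(0.0, 0.05, size=20)   # noisy sensors

def log_posterior(size, sigma=0.05):
    if size < 0.0 or size > 10.0:            # flat prior on [0, 10]
        return -np.inf
    resid = data - surrogate_strain(size)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis (DRAM adds delayed rejection and adaptation on top).
samples, size = [], 5.0
logp = log_posterior(size)
for _ in range(5000):
    prop = size + rng.normal(0.0, 0.2)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        size, logp = prop, logp_prop
    samples.append(size)

print(np.mean(samples[1000:]))   # posterior mean should land near true_size
```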
A second look at the second law
NASA Astrophysics Data System (ADS)
Bejan, Adrian
1988-05-01
An account is given of Bejan's (1988) reformulation of the axioms of engineering thermodynamics in terms of heat transfer, rather than mechanics. Attention is given to graphic constructions that can be used to illustrate the properties in question, such as the 'stability star' diagram summarizing various extrema reached by certain thermodynamic properties when a closed system settles into stable (unconstrained) equilibrium. Also noted are the exergy analysis and refrigeration applications to which the present reformulation of the second law of thermodynamics can be put.
Abrão, A C; de Gutiérrez, M R; Marin, H F
1997-04-01
The present study aimed to describe the instrument used in nursing consultations with puerperal women, reformulated on the basis of the identified diagnoses classified according to the NANDA revised Taxonomy I, and to identify the most frequent nursing diagnoses concerning maternal breastfeeding using the reformulated instrument. The diagnoses found in more than 50% of the women were: knowledge deficit (100%), sleep pattern disturbance (75%), altered sexuality patterns (75%), ineffective breastfeeding (66.6%) and impaired physical mobility (66.6%).
Towards real time speckle controlled retinal photocoagulation
NASA Astrophysics Data System (ADS)
Bliedtner, Katharina; Seifert, Eric; Stockmann, Leoni; Effe, Lisa; Brinkmann, Ralf
2016-03-01
Photocoagulation is a laser treatment widely used for the therapy of several retinal diseases. Intra- and inter-individual variations of the ocular transmission, light scattering and the retinal absorption make it impossible to achieve a uniform effective exposure and hence uniform damage throughout the therapy. Real-time monitoring and control of the induced damage are therefore highly desirable. Here, an approach to realize real-time optical feedback using dynamic speckle analysis is presented. A 532 nm continuous wave Nd:YAG laser is used for coagulation. During coagulation, speckle dynamics are monitored by coherent object illumination using a 633 nm HeNe laser and analyzed by a CMOS camera with a frame rate up to 1 kHz. A control system needs to determine whether the desired damage has been achieved and shut down the system within a fraction of the exposure time. Here we use a fast and simple adaptation of the generalized difference algorithm to analyze the speckle movements. This algorithm runs on an FPGA and is able to calculate a feedback value which is correlated with the thermal and coagulation-induced tissue motion and thus the achieved damage. For different spot sizes (50-200 μm) and different exposure times (50-500 ms) the algorithm shows the ability to discriminate between different categories of retinal pigment epithelial damage ex-vivo in enucleated porcine eyes. Furthermore, in-vivo experiments in rabbits show the ability of the system to determine tissue changes in living tissue during coagulation.
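A simplified, consecutive-frame adaptation of the generalized-difference idea: accumulate absolute speckle intensity changes between frames into a single feedback value and compare it with a calibrated shut-off threshold. The frame data and threshold handling are assumptions for illustration only.

```python
import numpy as np

def speckle_feedback(frames):
    """Fast adaptation of the generalized-difference idea: sum absolute
    intensity changes between consecutive speckle frames.  Large values
    indicate tissue motion caused by heating and coagulation."""
    frames = np.asarray(frames, dtype=float)
    return float(np.sum(np.abs(np.diff(frames, axis=0))))

def coagulation_reached(frames, threshold):
    """Simple shut-off criterion: stop the treatment laser once the
    accumulated speckle activity exceeds a calibrated threshold."""
    return speckle_feedback(frames) >= threshold

# Toy example: 10 frames of 32x32 speckle images with growing decorrelation.
rng = np.random.default_rng(1)
frames = [rng.random((32, 32)) * (1 + 0.1 * k) for k in range(10)]
print(speckle_feedback(frames), coagulation_reached(frames, threshold=500.0))
```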
NASA Astrophysics Data System (ADS)
Chiariotti, P.; Martarelli, M.; Revel, G. M.
2017-12-01
A novel non-destructive testing procedure for delamination detection based on the exploitation of the simultaneous time and spatial sampling provided by Continuous Scanning Laser Doppler Vibrometry (CSLDV) and the feature extraction capability of Multi-Level wavelet-based processing is presented in this paper. The processing procedure consists of a multi-step approach. Once the optimal mother-wavelet is selected as the one maximizing the Energy to Shannon Entropy Ratio criterion among the mother-wavelet space, a pruning operation aiming at identifying the best combination of nodes inside the full-binary tree given by Wavelet Packet Decomposition (WPD) is performed. The pruning algorithm combines, in a two-step manner, a measure of the randomness of the point pattern distribution on the damage map space with an analysis of the energy concentration of the wavelet coefficients on those nodes provided by the first pruning operation. A combination of the point pattern distributions provided by each node of the ensemble node set from the pruning algorithm allows for setting a Damage Reliability Index associated with the final damage map. The effectiveness of the whole approach is proven on both simulated and real test cases. A sensitivity analysis related to the influence of noise on the CSLDV signal provided to the algorithm is also discussed, showing that the processing developed is sufficiently robust to measurement noise. The method is promising: damage is well identified on different materials and for different damage-structure varieties.
Smart concrete slabs with embedded tubular PZT transducers for damage detection
NASA Astrophysics Data System (ADS)
Gao, Weihang; Huo, Linsheng; Li, Hongnan; Song, Gangbing
2018-02-01
The objective of this study is to develop a new concept and methodology of smart concrete slab (SCS) with embedded tubular lead zirconate titanate transducer array for image based damage detection. Stress waves, as the detecting signals, are generated by the embedded tubular piezoceramic transducers in the SCS. Tubular piezoceramic transducers are used due to their capacity of generating radially uniform stress waves in a two-dimensional concrete slab (such as bridge decks and walls), increasing the monitoring range. A circular type delay-and-sum (DAS) imaging algorithm is developed to image the active acoustic sources based on the direct response received by each sensor. After the scattering signals from the damage are obtained by subtracting the baseline response of the concrete structures from those of the defective ones, the elliptical type DAS imaging algorithm is employed to process the scattering signals and reconstruct the image of the damage. Finally, two experiments, including active acoustic source monitoring and damage imaging for concrete structures, are carried out to illustrate and demonstrate the effectiveness of the proposed method.
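A compact sketch of elliptical delay-and-sum imaging on baseline-subtracted signals: each pixel accumulates the scattered signal at the delay implied by the actuator-pixel-sensor travel time. The geometry, wave speed, and synthetic echo below are toy assumptions, not the experimental configuration.

```python
import numpy as np

def das_image(signals, actuators, sensors, grid_x, grid_y, c, fs):
    """Elliptical delay-and-sum on baseline-subtracted (scattered) signals.

    signals[(a, s)] is the scattered waveform for actuator a and sensor s,
    actuators/sensors map ids to (x, y) positions, c is wave speed, fs the
    sampling rate."""
    image = np.zeros((len(grid_y), len(grid_x)))
    for (a, s), sig in signals.items():
        ax, ay = actuators[a]
        sx, sy = sensors[s]
        for iy, y in enumerate(grid_y):
            for ix, x in enumerate(grid_x):
                travel = (np.hypot(x - ax, y - ay) + np.hypot(x - sx, y - sy)) / c
                idx = int(round(travel * fs))
                if idx < len(sig):
                    image[iy, ix] += abs(sig[idx])
    return image

# Toy setup: one actuator-sensor pair and a synthetic echo from a scatterer.
fs, c = 1.0e6, 4000.0                                   # 1 MHz sampling, ~4000 m/s
acts, sens = {0: (0.0, 0.0)}, {0: (0.5, 0.0)}
sig = np.zeros(2000)
sig[int(round(2 * np.hypot(0.25, 0.2) / c * fs))] = 1.0  # echo from point (0.25, 0.2)
img = das_image({(0, 0): sig}, acts, sens,
                np.linspace(0, 0.5, 26), np.linspace(0, 0.4, 21), c, fs)
# One pair only constrains the scatterer to an ellipse; multiple actuator-sensor
# pairs make the ellipses intersect at the damage location.
print(np.unravel_index(np.argmax(img), img.shape))
```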
Daily Average Consumption of 2 Long-Acting Opioids: An Interrupted Time Series Analysis
Puenpatom, R. Amy; Szeinbach, Sheryl L.; Ma, Larry; Ben-Joseph, Rami H.; Summers, Kent H.
2012-01-01
Background Oxycodone controlled release (CR) and oxymorphone extended release (ER) are frequently prescribed long-acting opioids, which are approved for twice-daily dosing. The US Food and Drug Administration approved a reformulated crush-resistant version of oxycodone CR in April 2010. Objective To compare the daily average consumption (DACON) for oxycodone CR and for oxymorphone ER before and after the introduction of the reformulated, crush-resistant version of oxycodone CR. Methods This was a retrospective claims database analysis using pharmacy claims from the MarketScan database for the period from January 2010 through March 2011. The interrupted time series analysis was used to evaluate the impact of the introduction of reformulated oxycodone CR on the DACON of the 2 drugs—oxycodone CR and oxymorphone ER. The source of the databases included private-sector health data from more than 150 medium and large employers. All prescription claims containing oxycodone CR and oxymorphone ER dispensed to members from January 1, 2010, to March 31, 2011, were included in the analysis. Prescription claims containing duplicate National Drug Codes, missing member identification, invalid quantities or inaccurate days supply of either drug, and DACON values of <1 and >500 were removed. Results The database yielded 483,063 prescription claims for oxycodone CR and oxymorphone ER from January 1, 2010, to March 31, 2011. The final sample consisted of 411,404 oxycodone CR prescriptions (traditional and reformulated) dispensed to 85,150 members and 62,656 oxymorphone ER prescriptions dispensed to 11,931 members. Before the introduction of reformulated oxycodone CR, DACON values for the highest strength available for each of the 2 drugs were 0.51 tablets higher for oxycodone CR than for oxymorphone ER, with mean DACON values of 3.5 for oxycodone CR and 3.0 for oxymorphone ER (P <.001). The differences of mean DACON between the 2 drugs for all lower strengths were 0.46 tablets, with mean DACON values of 2.7 for oxycodone CR and 2.3 for oxymorphone ER (P <.001). After the introduction of the new formulation, the difference in mean DACON between the 2 drugs was slightly lower: 0.45 tablets for the highest-strength and 0.40 tablets for the lower-strength pairs. Regression analyses showed that the immediate and overall impact of the reformulation of oxycodone CR on the DACON of oxycodone CR was minimal, whereas no changes were seen in the DACON of oxymorphone ER. The estimated DACON for oxycodone CR decreased by 0.1 tablets, or 3.7% (P <.001), 6 months after the new formulation was introduced. Conclusion The mean DACON was 0.4 tablets per day higher for oxycodone CR compared with oxymorphone ER for all dosage strengths for the entire study period. After the introduction of the reformulated oxycodone CR, the DACON for this drug was slightly mitigated; however, there was a minimal impact on the mean differences between oxycodone CR and oxymorphone ER. PMID:24991311
DOT National Transportation Integrated Search
2014-07-01
This report presents a vibration : - : based damage : - : detection methodology that is capable of effectively capturing crack growth : near connections and crack re : - : initiation of retrofitted connections. The proposed damage detection algorithm...
Combining model based and data based techniques in a robust bridge health monitoring algorithm.
DOT National Transportation Integrated Search
2014-09-01
Structural Health Monitoring (SHM) aims to analyze civil, mechanical and aerospace systems in order to assess : incipient damage occurrence. In this project, we are concerned with the development of an algorithm within the : SHM paradigm for applicat...
Chilcoat, HD; Butler, SF; Sellers, EM; Kadakia, A; Harikrishnan, V; Haddox, JD; Dart, RC
2016-01-01
An extended‐release opioid analgesic (OxyContin, OC) was reformulated with abuse‐deterrent properties to deter abuse. This report examines changes in abuse through oral and nonoral routes, doctor‐shopping, and fatalities in 10 studies 3.5 years after reformulation. Changes in OC abuse from 1 year before to 3 years after OC reformulation were calculated, adjusted for prescription changes. Abuse of OC decreased 48% in national poison center surveillance systems, decreased 32% in a national drug treatment system, and decreased 27% among individuals prescribed OC in claims databases. Doctor‐shopping for OC decreased 50%. Overdose fatalities reported to the manufacturer decreased 65%. Abuse of other opioids without abuse‐deterrent properties decreased 2 years later than OC and with less magnitude, suggesting OC decreases were not due to broader opioid interventions. Consistent with the formulation, decreases were larger for nonoral than oral abuse. Abuse‐deterrent opioids may mitigate abuse and overdose risks among chronic pain patients. PMID:27170195
Laboratory studies of sweets re-formulated to improve their dental properties.
Grenby, T H; Mistry, M
1996-03-01
To evaluate the potential dental effects of ten new types of sugar-free sweets formulated with Lycasin or isomalt as bulk sweeteners instead of sugars. Examination of the sweets for their acidity, fermentability by oral microorganisms, influence on the demineralisation of dental enamel, and their influence on human interdental plaque pH, compared with conventional sugar-containing sweets. The importance of reducing the levels of flavouring acids in the sweets was demonstrated. It was not straightforward to evaluate chocolate products in this system, but the potential benefits of re-formulating fruit gums, lollipops, chew-bars, toffee and fudge with Lycasin or isomalt in place of sugars were shown by determining their reduced acidogenicity and fermentability compared with conventional confectionery. The extent of demineralisation of dental enamel was related to both the acidity and the fermentability of the sweets. Re-formulating sweets with reduced acidity levels and bulk sweeteners not fermentable by dental plaque microorganisms can provide a basis for improving their potential dental effects.
Computational Modeling System for Deformation and Failure in Polycrystalline Metals
2009-03-29
Report fragments (table-of-contents and summary excerpts): FIB/EBSD characterization; the Voronoi Cell FEM for micromechanical modeling; VCFEM for microstructural damage modeling; adaptive multiscale simulations; an accurate and efficient image-based micromechanical finite element model for crystal plasticity and damage incorporating real morphology and topology with evolving strain localization and damage; and development of multi-scaling algorithms in the time domain for compression and localization.
NASA Technical Reports Server (NTRS)
Morton, Douglas C.; DeFries, Ruth S.; Nagol, Jyoteshwar; Souza, Carlos M., Jr.; Kasischke, Eric S.; Hurtt, George C.; Dubayah, Ralph
2011-01-01
Understory fires in Amazon forests alter forest structure, species composition, and the likelihood of future disturbance. The annual extent of fire-damaged forest in Amazonia remains uncertain due to difficulties in separating burning from other types of forest damage in satellite data. We developed a new approach, the Burn Damage and Recovery (BDR) algorithm, to identify fire-related canopy damages using spatial and spectral information from multi-year time series of satellite data. The BDR approach identifies understory fires in intact and logged Amazon forests based on the reduction and recovery of live canopy cover in the years following fire damages and the size and shape of individual understory burn scars. The BDR algorithm was applied to time series of Landsat (1997-2004) and MODIS (2000-2005) data covering one Landsat scene (path/row 226/068) in southern Amazonia and the results were compared to field observations, image-derived burn scars, and independent data on selective logging and deforestation. Landsat resolution was essential for detection of burn scars less than 50 ha, yet these small burns contributed only 12% of all burned forest detected during 1997-2002. MODIS data were suitable for mapping medium (50-500 ha) and large (greater than 500 ha) burn scars that accounted for the majority of all fire-damaged forest in this study. Therefore, moderate resolution satellite data may be suitable to provide estimates of the extent of fire-damaged Amazon forest at a regional scale. In the study region, Landsat-based understory fire damages in 1999 (1508 square kilometers) were an order of magnitude higher than during the 1997-1998 El Nino event (124 square kilometers and 39 square kilometers, respectively), suggesting a different link between climate and understory fires than previously reported for other Amazon regions. The results in this study illustrate the potential to address critical questions concerning climate and fire risk in Amazon forests by applying the BDR algorithm over larger areas and longer image time series.
2014-01-01
Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
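A much-simplified sketch of the described pipeline (colour transform to L*a*b*, thresholding, morphological clean-up) using scikit-image; the colour normalisation, optimisation step, and object-classification rules of the actual algorithm are not reproduced, and the channel weighting is an assumption.

```python
from skimage import color, filters, morphology

def segment_epidermis_like(rgb_image, min_region=500):
    """Crude stand-in for the described pipeline: enhance colour contrast in
    L*a*b* space, threshold, then clean up the mask with morphology."""
    lab = color.rgb2lab(rgb_image)
    # Combine colour (a* channel, strong for eosin-stained tissue) with intensity (L*).
    feature = 0.5 * lab[..., 1] + 0.5 * (100.0 - lab[..., 0])
    mask = feature > filters.threshold_otsu(feature)
    mask = morphology.binary_closing(mask, morphology.disk(3))
    mask = morphology.remove_small_objects(mask, min_size=min_region)
    return mask

# Usage (assuming an H&E biopsy image loaded as an RGB array):
# from skimage import io
# mask = segment_epidermis_like(io.imread("biopsy.png"))
```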
2013-05-18
Report fragments (table-of-contents excerpts): 1-D heat transfer model with pyrolysis and thermal damage; improvements and added features; pyrolysis model calibration; updated heat transfer algorithm flow chart (Figure 25).
Multi-Agent Patrolling under Uncertainty and Threats.
Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D
2015-01-01
We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents will obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm up to 44% for 10 agents and by 21% for 15 agents in large domains.
Damage Assessment Map from Interferometric Coherence
NASA Astrophysics Data System (ADS)
Yun, S.; Fielding, E. J.; Simons, M.; Rosen, P. A.; Owen, S. E.; Webb, F.
2010-12-01
Large earthquakes cause buildings to collapse, which often claims many lives. For example, the 2010 Haiti earthquake killed about 230,000 people, with about 280,000 buildings collapsed or severely damaged. When a major earthquake hits an urban area, one of the most critical pieces of information for rescue operations is a rapid and accurate assessment of building-collapse areas. A study of the 2003 Bam earthquake in Iran showed that interferometric coherence is useful for earthquake damage assessment (Fielding et al., 2005) when similar perpendicular baselines can be found for pre- and coseismic interferometric pairs and when there is little temporal and volume decorrelation. In this study we develop a new algorithm to create a more robust and accurate damage assessment map using interferometric coherence despite different interferometric baselines and in the presence of other decorrelation sources. We test the algorithm on a building block in the City of Pasadena, California, that recently underwent demolition for new construction; demolition serves as a proxy for building collapse due to earthquakes. The size of the building block is about 150 m E-W and 300 m N-S, and the demolition project started on April 23, 2007 and continued until January 22, 2008. After we process Japanese L-band ALOS PALSAR data with ROI_PAC, an interferometric coherence map that spans the demolition period is registered to a coherence map from before the demolition, the relative bias of the coherence values is removed, and then a causality constraint is applied to enhance the change due to demolition. The results show a clear change in coherence at the demolition site. We improve the signal-to-noise ratio of the coherence change at the demolition site from 17.3 (for the simple difference) to 44.6 (with the new algorithm). The damage assessment map algorithm will become more useful with the emergence of InSAR missions with more frequent data acquisition, such as Sentinel-1 and DESDynI.
Flood damage estimation of companies: A comparison of Stage-Damage-Functions and Random Forests
NASA Astrophysics Data System (ADS)
Sieg, Tobias; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2017-04-01
The development of appropriate flood damage models plays an important role not only for the damage assessment after an event but also to develop adaptation and risk mitigation strategies. So called Stage-Damage-Functions (SDFs) are often applied as a standard approach to estimate flood damage. These functions assign a certain damage to the water depth depending on the use or other characteristics of the exposed objects. Recent studies apply machine learning algorithms like Random Forests (RFs) to model flood damage. These algorithms usually consider more influencing variables and promise to depict a more detailed insight into the damage processes. In addition they provide an inherent validation scheme. Our study focuses on direct, tangible damage of single companies. The objective is to model and validate the flood damage suffered by single companies with SDFs and RFs. The data sets used are taken from two surveys conducted after the floods in the Elbe and Danube catchments in the years 2002 and 2013 in Germany. Damage to buildings (n = 430), equipment (n = 651) as well as goods and stock (n = 530) are taken into account. The model outputs are validated via a comparison with the actual flood damage acquired by the surveys and subsequently compared with each other. This study investigates the gain in model performance with the use of additional data and the advantages and disadvantages of the RFs compared to SDFs. RFs show an increase in model performance with an increasing amount of data records over a comparatively large range, while the model performance of the SDFs is already saturated for a small set of records. In addition, the RFs are able to identify damage influencing variables, which improves the understanding of damage processes. Hence, RFs can slightly improve flood damage predictions and provide additional insight into the underlying mechanisms compared to SDFs.
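A hedged sketch of the comparison on synthetic data: a depth-only stage-damage function versus a random forest that can exploit additional predictors. The data-generating process, function form, and predictors are invented for illustration, and the error comparison is indicative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
depth = rng.uniform(0.0, 3.0, n)               # water depth at the company [m]
duration = rng.uniform(1.0, 72.0, n)           # inundation duration [h]
precaution = rng.integers(0, 2, n)             # mitigation measures in place?
damage = 40.0 * np.sqrt(depth) + 0.3 * duration - 15.0 * precaution \
         + rng.normal(0.0, 5.0, n)             # synthetic damage [kEUR]

# Stage-Damage-Function: damage depends on water depth only (fitted sqrt curve).
coef = np.polyfit(np.sqrt(depth), damage, 1)
sdf_rmse = np.sqrt(np.mean((np.polyval(coef, np.sqrt(depth)) - damage) ** 2))

# Random Forest: uses all available predictors (cross-validated error).
X = np.column_stack([depth, duration, precaution])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf_rmse = np.sqrt(-cross_val_score(rf, X, damage,
                                   scoring="neg_mean_squared_error", cv=5).mean())
# Note: the SDF error is in-sample, so the comparison is only illustrative.
print(f"SDF RMSE: {sdf_rmse:.1f}, RF RMSE: {rf_rmse:.1f}")
```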
Robots that can adapt like animals.
Cully, Antoine; Clune, Jeff; Tarapore, Danesh; Mouret, Jean-Baptiste
2015-05-28
Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot 'think outside the box' to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
NASA Astrophysics Data System (ADS)
Park, Byeongjin; Sohn, Hoon
2017-07-01
Laser ultrasonic scanning, especially full-field wave propagation imaging, is attractive for damage visualization thanks to its noncontact nature, sensitivity to local damage, and high spatial resolution. However, its practicality is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Inspired by binary search, an accelerated damage visualization technique is developed to visualize damage with a reduced scanning time. The pitch-catch distance between the excitation point and the sensing point is also fixed during scanning to maintain a high signal-to-noise ratio (SNR) of measured ultrasonic responses. The approximate damage boundary is identified by examining the interactions between ultrasonic waves and damage observed at the scanning points that are sparsely selected by a binary search algorithm. Here, a time-domain laser ultrasonic response is transformed into a spatial ultrasonic domain response using a basis pursuit approach so that the interactions between ultrasonic waves and damage, such as reflections and transmissions, can be better identified in the spatial ultrasonic domain. Then, the area inside the identified damage boundary is visualized as damage. The performance of the proposed damage visualization technique is validated using a numerical simulation performed on an aluminum plate with a notch and experiments performed on an aluminum plate with a crack and a wind turbine blade with delamination. The proposed damage visualization technique accelerates the damage visualization process in three aspects: (1) the number of measurements that is necessary for damage visualization is dramatically reduced by a binary search algorithm; (2) the amount of averaging that is necessary to achieve a high SNR is reduced by keeping the wave propagation distance short; and (3) with the proposed technique, the same damage can be identified with a lower spatial resolution than the spatial resolution required by full-field wave propagation imaging.
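A minimal sketch of the binary-search idea along a single scan line: probe whether the response at a point shows interaction with damage and bisect to the boundary, so only a logarithmic number of scan points is needed per edge. The probe function below is a placeholder for the basis-pursuit-based interaction check.

```python
def find_boundary(probe, lo, hi, tol=0.5e-3):
    """Bisect along one scan line for the damage boundary.

    `probe(x)` is assumed to return True when the ultrasonic response at
    position x shows interaction with damage (e.g. a reflection or
    transmission change in the spatial ultrasonic domain).
    Requires probe(lo) != probe(hi)."""
    inside_lo = probe(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if probe(mid) == inside_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy example: damage occupies x in [0.030 m, 0.045 m] on a 0-0.1 m scan line.
damaged = lambda x: 0.030 <= x <= 0.045
left_edge = find_boundary(damaged, 0.0, 0.0375)     # bracket outside -> inside
right_edge = find_boundary(damaged, 0.0375, 0.1)    # bracket inside -> outside
print(round(left_edge, 4), round(right_edge, 4))    # ~0.03 and ~0.045
```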
NASA Astrophysics Data System (ADS)
Wodecki, Jacek; Michalak, Anna; Zimroz, Radoslaw
2018-03-01
Harsh industrial conditions present in underground mining cause many difficulties for local damage detection in heavy-duty machinery. For vibration signals, one of the most intuitive approaches to obtaining a signal with the expected properties, such as clearly visible informative features, is prefiltration with an appropriately prepared filter. The design of such a filter is a very broad field of research in its own right. In this paper, the authors propose a novel approach to dedicated optimal filter design using a progressive genetic algorithm. The presented method is fully data-driven and requires no prior knowledge of the signal. It has been tested against a set of real and simulated data. Its effectiveness has been proven for both the healthy and the damaged case. A termination criterion for the evolution process was developed, and a diagnostic decision-making feature has been proposed for determining the final result.
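A compact genetic-algorithm sketch in the spirit described: evolve FIR prefilter coefficients so that the filtered vibration signal maximises kurtosis, a common impulsiveness criterion for local damage. The fitness choice, operators, and termination are simplified assumptions and not the paper's progressive algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic vibration signal: broadband noise plus weak periodic impulses (local fault).
n = 4096
signal = rng.normal(0.0, 1.0, n)
signal[::256] += 6.0

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-12)

def fitness(taps):
    return kurtosis(np.convolve(signal, taps, mode="same"))

def evolve(n_taps=16, pop_size=30, generations=40):
    pop = rng.normal(0.0, 1.0, (pop_size, n_taps))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_taps)                          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0.0, 0.05, n_taps)                 # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, fitness(best)

best_taps, best_kurt = evolve()
print(f"raw kurtosis {kurtosis(signal):.1f} -> filtered {best_kurt:.1f}")
```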
Kusano, Kristofer; Gabler, Hampton C
2014-01-01
The odds of death for a seriously injured crash victim are drastically reduced if he or she receives care at a trauma center. Advanced automated crash notification (AACN) algorithms are postcrash safety systems that use data measured by the vehicles during the crash to predict the likelihood of occupants being seriously injured. The accuracy of these models is crucial to the success of an AACN. The objective of this study was to compare the predictive performance of competing injury risk models and algorithms: logistic regression, random forest, AdaBoost, naïve Bayes, support vector machine, and classification k-nearest neighbors. This study compared machine learning algorithms to the widely adopted logistic regression modeling approach. Machine learning algorithms have not been commonly studied in the motor vehicle injury literature. Machine learning algorithms may have higher predictive power than logistic regression, despite the drawback of lacking the ability to perform statistical inference. To evaluate the performance of these algorithms, data on 16,398 vehicles involved in non-rollover collisions were extracted from the NASS-CDS. Vehicles with any occupants having an Injury Severity Score (ISS) of 15 or greater were defined as those requiring victims to be treated at a trauma center. The performance of each model was evaluated using cross-validation. Cross-validation assesses how a model will perform in the future given new data not used for model training. The crash ΔV (change in velocity during the crash), damage side (struck side of the vehicle), seat belt use, vehicle body type, number of events, occupant age, and occupant sex were used as predictors in each model. Logistic regression slightly outperformed the machine learning algorithms based on sensitivity and specificity of the models. Previous studies on AACN risk curves used the same data to train and test the power of the models and as a result had higher sensitivity compared to the cross-validated results from this study. Future studies should account for future data, for example by using cross-validation, or they risk presenting optimistic predictions of field performance. Past algorithms have been criticized for relying on age and sex, which are difficult to measure with vehicle sensors, and for inaccuracies in classifying damage side. The models with accurate damage side and including age/sex did outperform models with less accurate damage side and without age/sex, but the differences were small, suggesting that the success of AACN is not reliant on these predictors.
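A minimal sketch of the comparison methodology on fabricated data: cross-validate a logistic regression and a random forest on the same predictors and compare their discrimination. The synthetic features only loosely mimic the study's predictors; NASS-CDS data are not used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 3000
delta_v = rng.gamma(shape=2.0, scale=12.0, size=n)        # crash delta-V [km/h]
belted = rng.integers(0, 2, n)
age = rng.integers(16, 90, n)
near_side = rng.integers(0, 2, n)                         # struck-side occupant
logit = -6.0 + 0.12 * delta_v - 1.2 * belted + 0.02 * age + 0.8 * near_side
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))    # "ISS >= 15" indicator

X = np.column_stack([delta_v, belted, age, near_side])
models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5)
    print(f"{name}: cross-validated AUC = {auc.mean():.3f}")
```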
1981-04-01
east of Arcola Creek. The Interim Report gave a favorable recommendation for the harbor project and the results were published in House Document No. 91... The purpose of this Draft Reformulation Phase I GDM Report (Draft Stage 3 Report) is to present the results of the Stage 3... requirements for a small-boat harbor at Geneva State Park. Results of the bathymetric survey and sediment sampling program are presented in Appendix A.
Damage mapping in structural health monitoring using a multi-grid architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathews, V. John
2015-03-31
This paper presents a multi-grid architecture for tomography-based damage mapping of composite aerospace structures. The system employs an array of piezo-electric transducers bonded on the structure. Each transducer may be used as an actuator as well as a sensor. The structure is excited sequentially using the actuators and the guided waves arriving at the sensors in response to the excitations are recorded for further analysis. The sensor signals are compared to their baseline counterparts and a damage index is computed for each actuator-sensor pair. These damage indices are then used as inputs to the tomographic reconstruction system. Preliminary damage maps are reconstructed on multiple coordinate grids defined on the structure. These grids are shifted versions of each other where the shift is a fraction of the spatial sampling interval associated with each grid. These preliminary damage maps are then combined to provide a reconstruction that is more robust to measurement noise in the sensor signals and the ill-conditioned problem formulation for single-grid algorithms. Experimental results on a composite structure with complexity that is representative of aerospace structures included in the paper demonstrate that for sufficiently high sensor densities, the algorithm of this paper is capable of providing damage detection and characterization with accuracy comparable to traditional C-scan and A-scan-based ultrasound non-destructive inspection systems quickly and without human supervision.
Spot-shadowing optimization to mitigate damage growth in a high-energy-laser amplifier chain.
Bahk, Seung-Whan; Zuegel, Jonathan D; Fienup, James R; Widmayer, C Clay; Heebner, John
2008-12-10
A spot-shadowing technique to mitigate damage growth in a high-energy laser is studied. Its goal is to minimize the energy loss and undesirable hot spots in intermediate planes of the laser. A nonlinear optimization algorithm solves for the complex fields required to mitigate damage growth in the National Ignition Facility amplifier chain. The method is generally applicable to any large fusion laser.
A Comparison of Vibration and Oil Debris Gear Damage Detection Methods Applied to Pitting Damage
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.
2000-01-01
Helicopter Health Usage Monitoring Systems (HUMS) must provide reliable, real-time performance monitoring of helicopter operating parameters to prevent damage of flight critical components. Helicopter transmission diagnostics are an important part of a helicopter HUMS. In order to improve the reliability of transmission diagnostics, many researchers propose combining two technologies, vibration and oil monitoring, using data fusion and intelligent systems. Some benefits of combining multiple sensors to make decisions include improved detection capabilities and increased probability the event is detected. However, if the sensors are inaccurate, or the features extracted from the sensors are poor predictors of transmission health, integration of these sensors will decrease the accuracy of damage prediction. For this reason, one must verify the individual integrity of vibration and oil analysis methods prior to integrating the two technologies. This research focuses on comparing the capability of two vibration algorithms, FM4 and NA4, and a commercially available on-line oil debris monitor to detect pitting damage on spur gears in the NASA Glenn Research Center Spur Gear Fatigue Test Rig. Results from this research indicate that the rate of change of debris mass measured by the oil debris monitor is comparable to the vibration algorithms in detecting gear pitting damage.
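A sketch of the FM4 parameter referenced above: the kurtosis of the "difference signal" that remains after removing the regular gear-mesh components from the synchronously averaged vibration signal. The mesh-removal step here is a crude FFT notch and the signals are synthetic, so the code is illustrative rather than a reproduction of the test-rig processing.

```python
import numpy as np

def fm4(difference_signal):
    """FM4: fourth moment of the difference signal normalised by the squared
    variance (its kurtosis).  Values near 3 indicate a healthy, Gaussian-like
    signal; localised tooth damage pushes FM4 upward."""
    d = difference_signal - np.mean(difference_signal)
    return len(d) * np.sum(d ** 4) / (np.sum(d ** 2) ** 2)

def difference_signal(avg_signal, mesh_orders, teeth=30):
    """Crude stand-in for removing the regular mesh components: zero the
    gear-mesh harmonics in the spectrum of the synchronous average."""
    spec = np.fft.rfft(avg_signal)
    for order in mesh_orders:
        k = order * teeth
        spec[max(k - 1, 0): k + 2] = 0.0
    return np.fft.irfft(spec, n=len(avg_signal))

# Toy synchronous average: mesh tone + noise + one damaged-tooth impulse.
n, teeth = 1024, 30
angle = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
healthy = np.sin(teeth * angle) + 0.1 * np.random.default_rng(0).normal(size=n)
damaged = healthy.copy()
damaged[512:518] += 1.5
for label, sig in [("healthy", healthy), ("damaged", damaged)]:
    print(label, round(fm4(difference_signal(sig, mesh_orders=[1, 2], teeth=teeth)), 2))
```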
Defect detection around rebars in concrete using focused ultrasound and reverse time migration.
Beniwal, Surendra; Ganguli, Abhijit
2015-09-01
Experimental and numerical investigations have been performed to assess the feasibility of damage detection around rebars in concrete using focused ultrasound and a Reverse Time Migration (RTM) based subsurface imaging algorithm. Since concrete is heterogeneous, an unfocused ultrasonic field will be randomly scattered by the aggregates, thereby masking information about damage(s). A focused ultrasonic field, on the other hand, increases the possibility of detection of an anomaly due to enhanced amplitude of the incident field in the focal region. Further, the RTM based reconstruction using scattered focused field data is capable of creating clear images of the inspected region of interest. Since scattering of a focused field by a damaged rebar differs qualitatively from that of an undamaged rebar, distinct images of damaged and undamaged situations are obtained in the RTM generated images. This is demonstrated with both numerical and experimental investigations. The total scattered field, acquired on the surface of the concrete medium, is used as input for the RTM algorithm to generate the subsurface image that helps to identify the damage. The proposed technique, therefore, has some advantage since knowledge about the undamaged scenario for the concrete medium is not necessary to assess its integrity.
A Green's Function Approach to Simulate DNA Damage by the Indirect Effect
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2013-01-01
DNA damage is of fundamental importance in understanding the effects of ionizing radiation. DNA is damaged by the direct effect of radiation (e.g. direct ionization) and by the indirect effect (e.g. damage by ·OH radicals created by the radiolysis of water). Despite years of research, many questions on DNA damage by ionizing radiation remain. In recent years, the Green's functions of the diffusion equation (GFDE) have been used extensively in biochemistry [1], notably to simulate biochemical networks in time and space [2]. In our future work on DNA damage, we wish to use an approach based on the GFDE to refine existing models on the indirect effect of ionizing radiation on DNA. To do so, we will use the code RITRACKS [3] developed at the NASA Johnson Space Center to simulate the radiation track structure and calculate the position of radiolytic species after irradiation. We have also recently developed an efficient Monte-Carlo sampling algorithm for the GFDE of reversible reactions with an intermediate state [4], which can be modified and adapted to simulate DNA damage by free radicals. To do so, we will use the known reaction rate constants between radicals (OH, eaq, H,...) and the DNA bases, sugars and phosphates, and use the sampling algorithms to simulate the diffusion of free radicals and chemical reactions with DNA. These techniques should help in understanding the contribution of the indirect effect to the formation of DNA damage and double-strand breaks.
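For a freely diffusing radical, the Green's function of the diffusion equation is a Gaussian propagator, so new positions can be sampled directly, as in the sketch below; the diffusion coefficient, time step, and contact-based reaction check are illustrative placeholders for the reversible-reaction sampling algorithm cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_free(positions, diffusion_coeff, dt):
    """Sample new positions from the free-diffusion Green's function
    (a Gaussian with variance 2*D*dt per Cartesian axis)."""
    sigma = np.sqrt(2.0 * diffusion_coeff * dt)
    return positions + rng.normal(0.0, sigma, size=positions.shape)

def within_reaction_range(positions, site, reaction_radius):
    """Toy contact criterion: a radical 'hits' a DNA site if it ends a step
    within the reaction radius (a full algorithm would sample the reaction
    probability from the Green's function itself)."""
    return np.linalg.norm(positions - site, axis=1) < reaction_radius

D_OH = 2.8e-9          # m^2/s, approximate diffusion coefficient of the OH radical
dt = 1.0e-12           # 1 ps step
radicals = rng.normal(0.0, 2.0e-9, size=(1000, 3))    # initial track positions [m]
site = np.zeros(3)                                      # a DNA target at the origin
for _ in range(100):
    radicals = propagate_free(radicals, D_OH, dt)
hits = within_reaction_range(radicals, site, reaction_radius=1.0e-9)
print(int(hits.sum()), "radicals within reaction range after 100 ps")
```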
A Tensor-Based Structural Damage Identification and Severity Assessment
Anaissi, Ali; Makki Alamdari, Mehrisadat; Rakotoarivelo, Thierry; Khoa, Nguyen Lu Dang
2018-01-01
Early damage detection is critical for a large set of global ageing infrastructure. Structural Health Monitoring systems provide a sensor-based quantitative and objective approach to continuously monitor these structures, as opposed to traditional engineering visual inspection. Analysing these sensed data is one of the major Structural Health Monitoring (SHM) challenges. This paper presents a novel algorithm to detect and assess damage in structures such as bridges. This method applies tensor analysis for data fusion and feature extraction, and further uses one-class support vector machine on this feature to detect anomalies, i.e., structural damage. To evaluate this approach, we collected acceleration data from a sensor-based SHM system, which we deployed on a real bridge and on a laboratory specimen. The results show that our tensor method outperforms a state-of-the-art approach using the wavelet energy spectrum of the measured data. In the specimen case, our approach succeeded in detecting 92.5% of induced damage cases, as opposed to 61.1% for the wavelet-based approach. While our method was applied to bridges, its algorithm and computation can be used on other structures or sensor-data analysis problems, which involve large series of correlated data from multiple sensors. PMID:29301314
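A simplified stand-in for the described approach: extract a low-rank feature per time window (here via an SVD of the unfolded sensor-by-feature slice, in place of a genuine tensor decomposition) and flag anomalies with a one-class support vector machine. The data and the damage signature are synthetic.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic SHM data: (time windows, sensors, spectral features).
healthy = rng.normal(0.0, 1.0, size=(200, 8, 16))
damaged = rng.normal(0.0, 1.0, size=(20, 8, 16))
damaged[:, 2, :] += 1.5          # one sensor's features shift when damage appears

def window_features(tensor, rank=3):
    """Simplified stand-in for tensor decomposition: unfold each time window
    into a sensors x features matrix and keep its leading singular values."""
    feats = []
    for window in tensor:
        s = np.linalg.svd(window, compute_uv=False)
        feats.append(s[:rank])
    return np.array(feats)

model = OneClassSVM(nu=0.05, gamma="scale").fit(window_features(healthy))
pred = model.predict(window_features(damaged))          # -1 marks an anomaly
print(f"detected {np.sum(pred == -1)} / {len(pred)} damaged windows")
```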
NASA Astrophysics Data System (ADS)
Hoffmann, K.; Srouji, R. G.; Hansen, S. O.
2017-12-01
The technology development within the structural design of long-span bridges in Norwegian fjords has created a need for reformulating the calculation format and the physical quantities used to describe the properties of wind and the associated wind-induced effects on bridge decks. Parts of a new probabilistic format describing the incoming, undisturbed wind are presented. It is expected that a fixed probabilistic format will facilitate a more physically consistent and precise description of the wind conditions, which in turn increases the accuracy and considerably reduces uncertainties in wind load assessments. Because the format is probabilistic, a quantification of the level of safety and uncertainty in predicted wind loads is readily accessible. A simple buffeting response calculation demonstrates the use of probabilistic wind data in the assessment of wind loads and responses. Furthermore, vortex-induced fatigue damage is discussed in relation to probabilistic wind turbulence data and response measurements from wind tunnel tests.
Sideband Algorithm for Automatic Wind Turbine Gearbox Fault Detection and Diagnosis: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zappala, D.; Tavner, P.; Crabtree, C.
2013-01-01
Improving the availability of wind turbines (WT) is critical to minimize the cost of wind energy, especially for offshore installations. As gearbox downtime has a significant impact on WT availabilities, the development of reliable and cost-effective gearbox condition monitoring systems (CMS) is of great concern to the wind industry. Timely detection and diagnosis of developing gear defects within a gearbox is an essential part of minimizing unplanned downtime of wind turbines. Monitoring signals from WT gearboxes are highly non-stationary as turbine load and speed vary continuously with time. Time-consuming and costly manual handling of large amounts of monitoring data represents one of the main limitations of most current CMSs, so automated algorithms are required. This paper presents a fault detection algorithm for incorporation into a commercial CMS for automatic gear fault detection and diagnosis. The algorithm allowed the assessment of gear fault severity by tracking progressive tooth gear damage during variable speed and load operating conditions of the test rig. Results show that the proposed technique proves efficient and reliable for detecting gear damage. Once implemented into WT CMSs, this algorithm can automate data interpretation, reducing the quantity of information that WT operators must handle.
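A sketch of the kind of sideband analysis such an algorithm automates: measure the spectral energy in the sidebands around the gear-mesh frequency, spaced at the shaft rotational frequency, and track it as a fault-severity indicator. The frequencies, windowing, and toy modulated signal are assumptions, not the commercial CMS implementation.

```python
import numpy as np

def sideband_energy(signal, fs, mesh_hz, shaft_hz, n_sidebands=3, bw=1.0):
    """Sum spectral energy in the first few sidebands around the gear-mesh
    frequency.  Growing sideband energy indicates developing tooth damage."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    energy = 0.0
    for k in range(1, n_sidebands + 1):
        for f in (mesh_hz - k * shaft_hz, mesh_hz + k * shaft_hz):
            band = (freqs > f - bw) & (freqs < f + bw)
            energy += spec[band].sum()
    return energy

# Toy signal: mesh tone with amplitude modulation at shaft speed (damaged tooth).
fs, T = 10_000.0, 4.0
t = np.arange(0.0, T, 1.0 / fs)
shaft_hz, mesh_hz = 20.0, 20.0 * 32               # 32-tooth gear
healthy = np.sin(2 * np.pi * mesh_hz * t)
damaged = (1.0 + 0.3 * np.sin(2 * np.pi * shaft_hz * t)) * np.sin(2 * np.pi * mesh_hz * t)
for label, sig in [("healthy", healthy), ("damaged", damaged)]:
    print(label, f"{sideband_energy(sig, fs, mesh_hz, shaft_hz):.1f}")
```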
Mhurchu, Cliona Ni; Eyles, Helen; Choi, Yeun-Hyang
2017-08-22
Interpretive, front-of-pack (FOP) nutrition labels may encourage reformulation of packaged foods. We aimed to evaluate the effects of the Health Star Rating (HSR), a new voluntary interpretive FOP labelling system, on food reformulation in New Zealand. Annual surveys of packaged food and beverage labelling and composition were undertaken in supermarkets before and after adoption of HSR, i.e., 2014 to 2016. Outcomes assessed were HSR uptake by food group; star ratings of products displaying a HSR label; nutritional composition of products displaying HSR compared with non-HSR products; and the composition of products displaying HSR labels in 2016 compared with their composition prior to introduction of HSR. In 2016, two years after adoption of the voluntary system, 5.3% of packaged food and beverage products surveyed (n = 807/15,357) displayed HSR labels. The highest rates of uptake were for cereals, convenience foods, packaged fruit and vegetables, sauces and spreads, and 'Other' products (predominantly breakfast beverages). Products displaying HSR labels had higher energy density but had significantly lower mean saturated fat, total sugar and sodium, and higher fibre, contents than non-HSR products (all p-values < 0.001). Small but statistically significant changes were observed in mean energy density (-29 kJ/100 g, p = 0.002), sodium (-49 mg/100 g, p = 0.03) and fibre (+0.5 g/100 g, p = 0.001) contents of HSR-labelled products compared with their composition prior to adoption of HSR. Reformulation of HSR-labelled products was greater than that of non-HSR-labelled products over the same period, e.g., energy reduction in HSR products was greater than in non-HSR products (-1.5% versus -0.4%), and sodium content of HSR products decreased by 4.6% while that of non-HSR products increased by 3.1%. We conclude that roll-out of the voluntary HSR labelling system is driving healthier reformulation of some products. Greater uptake across the full food supply should improve population diets.
A nutrient profiling system for the (re)formulation of a global food and beverage portfolio.
Vlassopoulos, Antonis; Masset, Gabriel; Charles, Veronique Rheiner; Hoover, Cassandra; Chesneau-Guillemont, Caroline; Leroy, Fabienne; Lehmann, Undine; Spieldenner, Jörg; Tee, E-Siong; Gibney, Mike; Drewnowski, Adam
2017-04-01
To describe the Nestlé Nutritional Profiling System (NNPS) developed to guide the reformulation of Nestlé products, and the results of its application in the USA and France. The NNPS is a category-specific system that calculates nutrient targets per serving as consumed, based on age-adjusted dietary guidelines. Products are aggregated into 32 food categories. The NNPS ensures that excessive amounts of nutrients to limit cannot be compensated for by adding nutrients to encourage. A study was conducted to measure changes in nutrient profiles of the most widely purchased Nestlé products from eight food categories (n = 99) in the USA and France. A comparison was made between the 2009-2010 and 2014-2015 products. The application of the NNPS between 2009-2010 and 2014-2015 was associated with an overall downward trend for all nutrients to limit. Sodium and total sugars contents were reduced by up to 22% and 31%, respectively. Saturated fatty acid and total fat reductions were less homogeneous across categories, with children's products having larger reductions. Energy per serving was reduced by <10% in most categories, while serving sizes remained unchanged. The NNPS sets feasible and yet challenging targets for public health-oriented reformulation of a varied product portfolio; its application was associated with improved nutrient density in eight major food categories in the USA and France. Confirmatory analyses are needed in other countries and food categories; the impact of such a large-scale reformulation on dietary intake and health remains to be investigated.
Glycosylation Focuses Sequence Variation in the Influenza A Virus H1 Hemagglutinin Globular Domain
Hensley, Scott E.; Hurt, Darrell E.; Bennink, Jack R.; Yewdell, Jonathan W.
2010-01-01
Antigenic drift in the influenza A virus hemagglutinin (HA) is responsible for seasonal reformulation of influenza vaccines. Here, we address an important and largely overlooked issue in antigenic drift: how does the number and location of glycosylation sites affect HA evolution in man? We analyzed the glycosylation status of all full-length H1 subtype HA sequences available in the NCBI influenza database. We devised the “flow index” (FI), a simple algorithm that calculates the tendency for viruses to gain or lose consensus glycosylation sites. The FI predicts the predominance of glycosylation states among existing strains. Our analyses show that while the number of glycosylation sites in the HA globular domain does not influence the overall magnitude of variation in defined antigenic regions, variation focuses on those regions unshielded by glycosylation. This supports the conclusion that glycosylation generally shields HA from antibody-mediated neutralization, and implies that fitness costs in accommodating oligosaccharides limit virus escape via HA hyperglycosylation. PMID:21124818
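The flow index itself is defined in the paper and is not reproduced here; as a minimal, hedged illustration of the underlying bookkeeping, the Python sketch below (with a made-up toy sequence) counts consensus N-glycosylation sequons (N-X-S/T, X not proline), which is the quantity whose gains and losses the FI tracks.

```python
import re

def count_sequons(protein_seq: str) -> int:
    """Count consensus N-glycosylation sites (sequons) N-X-[S/T], X != P.

    This is the standard sequon definition; the paper's flow index (FI) is
    built on gains and losses of such sites across strains, but its exact
    formula is not reproduced here.
    """
    # A lookahead captures overlapping matches.
    return len(re.findall(r"(?=N[^P][ST])", protein_seq.upper()))

# Toy example (not a real HA sequence):
print(count_sequons("MKANLSTNGSIVKP"))  # -> 2
```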
Boyen, Peter; Van Dyck, Dries; Neven, Frank; van Ham, Roeland C H J; van Dijk, Aalt D J
2011-01-01
Correlated motif mining (cmm) is the problem of finding overrepresented pairs of patterns, called motifs, in sequences of interacting proteins. Algorithmic solutions for cmm thereby provide a computational method for predicting binding sites for protein interaction. In this paper, we adopt a motif-driven approach where the support of candidate motif pairs is evaluated in the network. We experimentally establish the superiority of the Chi-square-based support measure over other support measures. Furthermore, we show that cmm is an NP-hard problem for a large class of support measures (including Chi-square) and reformulate the search for correlated motifs as a combinatorial optimization problem. We then present the generic metaheuristic slider, which uses steepest ascent with a neighborhood function based on sliding motifs and employs the Chi-square-based support measure. We show that slider outperforms existing motif-driven cmm methods and scales to large protein-protein interaction networks. The slider implementation and the data used in the experiments are available on http://bioinformatics.uhasselt.be.
Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.
Joshi, Niranjan; Kadir, Timor; Brady, Michael
2011-08-01
Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
Designing train-speed trajectory with energy efficiency and service quality
NASA Astrophysics Data System (ADS)
Jia, Jiannan; Yang, Kai; Yang, Lixing; Gao, Yuan; Li, Shukai
2018-05-01
With the development of automatic train operations, optimal trajectory design is significant to the performance of train operations in railway transportation systems. Considering energy efficiency and service quality, this article formulates a bi-objective train-speed trajectory optimization model to minimize simultaneously the energy consumption and travel time in an inter-station section. This article is distinct from previous studies in that more sophisticated train driving strategies characterized by the acceleration/deceleration gear, the cruising speed, and the speed-shift site are specifically considered. For obtaining an optimal train-speed trajectory which has equal satisfactory degree on both objectives, a fuzzy linear programming approach is applied to reformulate the objectives. In addition, a genetic algorithm is developed to solve the proposed train-speed trajectory optimization problem. Finally, a series of numerical experiments based on a real-world instance of Beijing-Tianjin Intercity Railway are implemented to illustrate the practicability of the proposed model as well as the effectiveness of the solution methodology.
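As a rough sketch of how a fuzzy linear programming scalarization can balance the two objectives, the Python fragment below (with made-up bounds and linear membership functions, which may well differ from the paper's exact formulation) computes a common satisfaction degree that a genetic algorithm could then maximize over the driving-strategy variables.

```python
def satisfaction(value, worst, best):
    """Linear membership degree: 1 at the best (minimum) value, 0 at the worst."""
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

def fuzzy_score(energy, time, bounds):
    """Scalarize the bi-objective problem as the minimum satisfaction degree,
    so maximizing this score seeks a trajectory equally good on both objectives."""
    mu_e = satisfaction(energy, bounds["energy_worst"], bounds["energy_best"])
    mu_t = satisfaction(time, bounds["time_worst"], bounds["time_best"])
    return min(mu_e, mu_t)

bounds = {"energy_worst": 120.0, "energy_best": 80.0,   # kWh, illustrative only
          "time_worst": 360.0, "time_best": 300.0}      # s, illustrative only
print(fuzzy_score(energy=95.0, time=330.0, bounds=bounds))  # -> 0.5
```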
Vectorization of a particle code used in the simulation of rarefied hypersonic flow
NASA Technical Reports Server (NTRS)
Baganoff, D.
1990-01-01
A limitation of the direct simulation Monte Carlo (DSMC) method is that it does not allow efficient use of the vector architectures that predominate in current supercomputers. Consequently, the problems that can be handled are limited to those of one- and two-dimensional flows. This work focuses on a reformulation of the DSMC method with the objective of designing a procedure that is optimized for the vector architectures found on machines such as the Cray-2. In addition, it focuses on finding a better balance between algorithmic complexity and the total number of particles employed in a simulation so that the overall performance of a particle simulation scheme can be greatly improved. Simulations of the flow about a 3D blunt body are performed with 10^7 particles and 4 × 10^5 mesh cells. Good statistics are obtained with time averaging over 800 time steps using 4.5 h of Cray-2 single-processor CPU time.
Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression
Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.
2016-01-01
The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
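The discrete-time reformulation hinges on interval-specific transition matrices obtained from the generator matrix Q via the matrix exponential, P(Δt) = exp(QΔt), one per irregular observation gap. A minimal sketch with an illustrative three-state generator (the values are made up, not from the glaucoma or Alzheimer's data):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state generator matrix Q (rows sum to zero); values are invented.
Q = np.array([[-0.30,  0.25,  0.05],
              [ 0.00, -0.10,  0.10],
              [ 0.00,  0.00,  0.00]])   # absorbing last state

def transition_matrix(Q, dt):
    """Discrete transition matrix over an irregular visit interval dt.
    This is the building block used when a CT-HMM is rewritten as a
    time-inhomogeneous discrete HMM: each observation gap gets its own P."""
    return expm(Q * dt)

for dt in (0.5, 2.0):          # two irregularly spaced visits (e.g., years)
    print(dt, np.round(transition_matrix(Q, dt), 3))
```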
FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.
Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan
2017-07-01
In ultrasound image analysis, the speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" still remains as a challenge for the speckle tracking methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values, when the tissue deformation is large. The major drawback of these methods is the high computational complexity. Even the graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs on handling different image processing components in these methods is discussed. A fast and memory-saving image warping approach is proposed. The algorithms are reformulated to build a highly efficient pipeline on FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on the NVIDIA graphic card (GeForce GTX 580).
Wang, Xiang-Hua; Yin, Wen-Yan; Chen, Zhi Zhang David
2013-09-09
The one-step leapfrog alternating-direction-implicit finite-difference time-domain (ADI-FDTD) method is reformulated for simulating general electrically dispersive media. It models material dispersive properties with equivalent polarization currents. These currents are then solved with the auxiliary differential equation (ADE) and then incorporated into the one-step leapfrog ADI-FDTD method. The final equations are presented in the form similar to that of the conventional FDTD method but with second-order perturbation. The adapted method is then applied to characterize (a) electromagnetic wave propagation in a rectangular waveguide loaded with a magnetized plasma slab, (b) transmission coefficient of a plane wave normally incident on a monolayer graphene sheet biased by a magnetostatic field, and (c) surface plasmon polaritons (SPPs) propagation along a monolayer graphene sheet biased by an electrostatic field. The numerical results verify the stability, accuracy and computational efficiency of the proposed one-step leapfrog ADI-FDTD algorithm in comparison with analytical results and the results obtained with the other methods.
NASA Astrophysics Data System (ADS)
Hester, David; González, Arturo
2017-06-01
Given the large number of bridges that currently have no instrumentation, there are obvious advantages in monitoring the condition of a bridge by analysing the response of a vehicle crossing it. As a result, the last two decades have seen a rise in research attempting to solve the problem of identifying damage in a bridge from vehicle measurements. This paper examines the theoretical feasibility and practical limitations of a drive-by system in identifying damage associated with localised stiffness losses. First, the nature of the damage feature that is sought within the vehicle response needs to be characterized. For this purpose, the total vehicle response is considered to be made of 'static' and 'dynamic' components and, where the bridge has experienced a localised loss in stiffness, an additional 'damage' component. Understanding the nature of this 'damage' component is crucial to an informed discussion of how damage can be identified and localised. Leveraging this understanding, the authors propose a wavelet-based drive-by algorithm. By comparing the effect of the 'damage' component to other key effects defining the measurements, such as 'vehicle speed', the 'road profile' and 'noise', on a wavelet contour plot, it is possible to establish whether there is a frequency range where drive-by can be successful. The algorithm then uses specific frequency bands to improve the sensitivity to damage with respect to the limitations imposed by vehicle-bridge vibrations. Recommendations on the selection of the mother wavelet and frequency band are provided. Finally, the paper discusses the impact of noise and road profile on the ability of the approach to identify damage and how periodic measurements can be effective at monitoring localised stiffness changes.
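As a hedged sketch of the wavelet analysis step (the paper gives its own recommendations on mother wavelet and frequency band, which are not reproduced here), the fragment below computes a continuous wavelet transform of a simulated axle acceleration record with PyWavelets; the sampling rate and the 2-50 Hz band are assumptions for illustration.

```python
import numpy as np
import pywt

fs = 1000.0                           # accelerometer sampling rate (Hz), assumed
t = np.arange(0, 5.0, 1.0 / fs)
accel = np.random.randn(t.size)       # placeholder for the measured vehicle response

# Map wavelet scales to a frequency band of interest (2-50 Hz, illustrative).
wavelet = "morl"
freqs = np.linspace(2, 50, 100)
scales = pywt.central_frequency(wavelet) * fs / freqs

coefs, coef_freqs = pywt.cwt(accel, scales, wavelet, sampling_period=1.0 / fs)
# A localised stiffness loss would be sought as an anomaly in |coefs| near the
# damage location, after accounting for speed, road profile and noise effects.
print(coefs.shape, coef_freqs[:3])
```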
Magnusson, Roger; Reeve, Belinda
2015-01-01
Strategies to reduce excess salt consumption play an important role in preventing cardiovascular disease, which is the largest contributor to global mortality from non-communicable diseases. In many countries, voluntary food reformulation programs seek to reduce salt levels across selected product categories, guided by aspirational targets to be achieved progressively over time. This paper evaluates the industry-led salt reduction programs that operate in the United Kingdom and Australia. Drawing on theoretical concepts from the field of regulatory studies, we propose a step-wise or “responsive” approach that introduces regulatory “scaffolds” to progressively increase levels of government oversight and control in response to industry inaction or under-performance. Our model makes full use of the food industry’s willingness to reduce salt levels in products to meet reformulation targets, but recognizes that governments remain accountable for addressing major diet-related health risks. Creative regulatory strategies can assist governments to fulfill their public health obligations, including in circumstances where there are political barriers to direct, statutory regulation of the food industry. PMID:26133973
Specification Reformulation During Specification Validation
NASA Technical Reports Server (NTRS)
Benner, Kevin M.
1992-01-01
The goal of the ARIES Simulation Component (ASC) is to uncover behavioral errors by 'running' a specification at the earliest possible points during the specification development process. The problems to be overcome are the obvious ones: the specification may be large, incomplete, underconstrained, and/or uncompilable. This paper describes how specification reformulation is used to mitigate these problems. ASC begins by decomposing validation into specific validation questions. Next, the specification is reformulated to abstract out all those features unrelated to the identified validation question, thus creating a new specialized specification. ASC relies on a precise statement of the validation question and a careful application of transformations so as to preserve the essential specification semantics in the resulting specialized specification. This technique is a win if the resulting specialized specification is small enough that the user may easily handle any remaining obstacles to execution. This paper will: (1) describe what a validation question is; (2) outline analysis techniques for identifying what concepts are and are not relevant to a validation question; and (3) identify and apply transformations which remove these less relevant concepts while preserving those which are relevant.
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2016-10-01
In this study, a novel approach via improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. IGA, which exhibits better performance than traditional GA, is used to synthesise a fuzzy observer to not only realise the exponential synchronisation, but also achieve optimal H∞ performance by minimizing the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.
A genetic algorithm approach to estimate glacier mass variations from GRACE data
NASA Astrophysics Data System (ADS)
Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten
2017-04-01
The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method in terms of an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point-masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides from the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that the mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules like, e.g., the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by introducing regularization to the overall optimization problem. Based on this novel approach, estimations of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.
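A minimal sketch of how regularization can be folded into the GA's objective, assuming a simple Tikhonov penalty whose weight is treated as an additional gene; the point-mass design operator A_of, the gene layout and all numbers below are hypothetical stand-ins for the actual GRACE K-band processing.

```python
import numpy as np

def fitness(genes, A_of, d_obs):
    """Fitness of one GA individual: negative (data misfit + Tikhonov penalty).

    genes = [point-mass magnitudes..., depth, log10(alpha)]; this layout,
    the operator A_of and every number below are illustrative assumptions.
    """
    *m, depth, log_alpha = genes
    m = np.asarray(m)
    A = A_of(depth)                      # design matrix for this depth
    alpha = 10.0 ** log_alpha            # regularization weight as a gene
    misfit = np.sum((A @ m - d_obs) ** 2)
    penalty = alpha * np.sum(m ** 2)     # zeroth-order Tikhonov term
    return -(misfit + penalty)           # the GA maximizes fitness

# Toy kernel: sensitivity decays with point-mass depth.
A_of = lambda depth: np.vander(np.linspace(0.0, 1.0, 5), 3) / (1.0 + depth)
d_obs = np.array([0.2, 0.1, 0.0, -0.1, -0.3])
print(fitness([0.1, -0.2, 0.05, 10.0, -2.0], A_of, d_obs))
```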
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang, E-mail: qd2125@columbia.edu; Yang, Jiang, E-mail: jyanghkbu@gmail.com
This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
Red light improves spermatozoa motility and does not induce oxidative DNA damage
NASA Astrophysics Data System (ADS)
Preece, Daryl; Chow, Kay W.; Gomez-Godinez, Veronica; Gustafson, Kyle; Esener, Selin; Ravida, Nicole; Durrant, Barbara; Berns, Michael W.
2017-04-01
The ability to successfully fertilize ova relies upon the swimming ability of spermatozoa. Both in humans and in animals, sperm motility has been used as a metric for the viability of semen samples. Recently, several studies have examined the efficacy of low dosage red light exposure for cellular repair and increasing sperm motility. Of prime importance to the practical application of this technique is the absence of DNA damage caused by radiation exposure. In this study, we examine the effect of 633 nm coherent, red laser light on sperm motility using a novel wavelet-based algorithm that allows for direct measurement of curvilinear velocity under red light illumination. This new algorithm gives results comparable to the standard computer-assisted sperm analysis (CASA) system. We then assess the safety of red light treatment of sperm by analyzing, (1) the levels of double-strand breaks in the DNA, and (2) oxidative damage in the sperm DNA. The results demonstrate that for the parameters used there are insignificant differences in oxidative DNA damage as a result of irradiation.
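Curvilinear velocity (VCL) is the standard CASA metric referred to here: the point-to-point path length of the tracked sperm head divided by the track duration. A minimal sketch of that calculation follows (the paper's wavelet-based tracking under red-light illumination is not reproduced; the track and frame rate are invented).

```python
import numpy as np

def curvilinear_velocity(xy, fps):
    """VCL: total point-to-point path length divided by track duration.

    xy  : (N, 2) array of tracked sperm-head positions in micrometres
    fps : video frame rate in frames per second
    """
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    duration = (len(xy) - 1) / fps
    return steps.sum() / duration        # micrometres per second

track = np.array([[0, 0], [2, 1], [4, 3], [5, 5]], dtype=float)  # toy track
print(curvilinear_velocity(track, fps=60))
```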
A relativistic analysis of clock synchronization
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1974-01-01
The relativistic conversion between coordinate time and atomic time is reformulated to allow simpler time calculations relating analysis in solar-system barycentric coordinates (using coordinate time) with earth-fixed observations (measuring earth-bound proper time or atomic time). After an interpretation of terms, this simplified formulation, which has a rate accuracy of about 10^-15, is used to explain the conventions required in the synchronization of a worldwide clock network and to analyze two synchronization techniques: portable clocks and radio interferometry. Finally, pertinent experimental tests of relativity are briefly discussed in terms of the reformulated time conversion.
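For reference, the standard first-order relation between an earth-fixed clock's proper time and barycentric coordinate time, consistent with the quoted rate accuracy, reads as follows (this is the textbook form, not necessarily the paper's exact convention or choice of constants):

```latex
% Rate of earth-bound proper time \tau with respect to barycentric coordinate time t:
\frac{d\tau}{dt} \;\approx\; 1 \;-\; \frac{U(\mathbf{r})}{c^{2}} \;-\; \frac{v^{2}}{2c^{2}},
```

where U is the gravitational potential at the clock site and v its barycentric velocity; terms neglected at this order are what limit the rate accuracy to roughly one part in 10^15.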
Thermography Inspection for Early Detection of Composite Damage in Structures During Fatigue Loading
NASA Technical Reports Server (NTRS)
Zalameda, Joseph N.; Burke, Eric R.; Parker, F. Raymond; Seebo, Jeffrey P.; Wright, Christopher W.; Bly, James B.
2012-01-01
Advanced composite structures are commonly tested under controlled loading. Understanding the initiation and progression of composite damage under load is critical for validating design concepts and structural analysis tools. Thermal nondestructive evaluation (NDE) is used to detect and characterize damage in composite structures during fatigue loading. A difference image processing algorithm is demonstrated to enhance damage detection and characterization by removing thermal variations not associated with defects. In addition, a one-dimensional multilayered thermal model is used to characterize damage. Lastly, the thermography results are compared to other inspections such as non-immersion ultrasonic inspections and computed tomography X-ray.
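A minimal numpy sketch of the reference-subtraction idea behind a difference image processing algorithm (the paper's actual algorithm, tied to the fatigue loading cycle, is more involved); the array sizes and baseline choice are placeholders.

```python
import numpy as np

def difference_image(frames, reference):
    """Subtract a reference (baseline) thermal image from each frame to
    suppress thermal variations not associated with defects."""
    return frames - reference[None, :, :]

frames = np.random.rand(10, 64, 64).astype(np.float32)   # stand-in IR sequence
reference = frames[:3].mean(axis=0)                       # baseline average
diff = difference_image(frames, reference)
print(diff.shape, float(diff.mean()))
```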
NASA Astrophysics Data System (ADS)
Huang, Honglan; Mao, Hanying; Mao, Hanling; Zheng, Weixue; Huang, Zhenfeng; Li, Xinxin; Wang, Xianghong
2017-12-01
Cumulative fatigue damage detection for used parts plays a key role in remanufacturing engineering and is directly related to the service safety of the remanufactured parts. Given the nonlinear behaviour that cumulative fatigue damage induces in used parts, a detection approach based on nonlinear output frequency response functions (NOFRFs) offers a way to address this key problem. First, a modified PSO-adaptive lasso algorithm is introduced to improve the accuracy of the NARMAX model under impulse hammer excitation; an effective new algorithm is then derived to estimate the NOFRFs under rectangular pulse excitation, and an NOFRF-based index is introduced to detect cumulative fatigue damage in used parts. A novel damage detection approach that integrates the NARMAX model and the rectangular pulse excitation is thus proposed for NOFRF identification and cumulative fatigue damage detection of used parts. Finally, experimental studies on fatigued plate specimens and used connecting rods are conducted to verify the validity of the approach. The results show that the new approach detects cumulative fatigue damage in used parts effectively and efficiently, and that the values of the NOFRF-based index can distinguish different levels of fatigue damage or working time. Since the proposed approach extracts the nonlinear properties of a system from only a single excitation of the inspected part, it shows great promise for remanufacturing engineering applications.
NASA Astrophysics Data System (ADS)
Cao, Pei; Qi, Shuai; Tang, J.
2018-03-01
The impedance/admittance measurements of a piezoelectric transducer bonded to or embedded in a host structure can be used as a damage indicator. When a credible model of the healthy structure, such as the finite element model, is available, using the impedance/admittance change information as input, it is possible to identify both the location and severity of damage. The inverse analysis, however, may be under-determined as the number of unknowns in high-frequency analysis is usually large while available input information is limited. The fundamental challenge thus is how to find a small set of solutions that cover the true damage scenario. In this research we cast the damage identification problem into a multi-objective optimization framework to tackle this challenge. With damage locations and severities as unknown variables, one of the objective functions is the difference between impedance-based model prediction in the parametric space and the actual measurements. Considering that damage occurrence generally affects only a small number of elements, we deliberately choose the sparsity of the unknown variables, measured by the l0 norm, as another objective function. Subsequently, a multi-objective Dividing RECTangles (DIRECT) algorithm is developed to facilitate the inverse analysis, where the sparsity is further emphasized by a sigmoid transformation. As a deterministic technique, this approach yields results that are repeatable and conclusive. In addition, only one algorithmic parameter, the number of function evaluations, is needed. Numerical and experimental case studies demonstrate that the proposed framework is capable of obtaining high-quality damage identification solutions with limited measurement information.
Evaluating 99mTc Auger electrons for targeted tumor radiotherapy by computational methods.
Tavares, Adriana Alexandre S; Tavares, João Manuel R S
2010-07-01
Technetium-99m (99mTc) has been widely used as an imaging agent but only recently has been considered for therapeutic applications. This study aims to analyze the potential use of 99mTc Auger electrons for targeted tumor radiotherapy by evaluating the DNA damage and its probability of correct repair and by studying the cellular kinetics, following 99mTc Auger electron irradiation in comparison to iodine-131 (131I) beta minus particles and astatine-211 (211At) alpha particle irradiation. Computational models were used to estimate the yield of DNA damage (fast Monte Carlo damage algorithm), the probability of correct repair (Monte Carlo excision repair algorithm), and cell kinetic effects (virtual cell radiobiology algorithm) after irradiation with the selected particles. The results obtained with the algorithms used suggested that 99mTc CKMMX (all M-shell Coster-Kronig (CK) and super-CK transitions) electrons and Auger MXY (all M-shell Auger transitions) have a therapeutic potential comparable to high linear energy transfer 211At alpha particles and higher than 131I beta minus particles. All the other 99mTc electrons had a therapeutic potential similar to 131I beta minus particles. 99mTc CKMMX electrons and Auger MXY presented a higher probability to induce apoptosis than 131I beta minus particles and a probability similar to 211At alpha particles. Based on the results here, 99mTc CKMMX electrons and Auger MXY are useful electrons for targeted tumor radiotherapy.
NASA Astrophysics Data System (ADS)
Kim, Goo; Kim, Dae Sun; Lee, Yang-Won
2013-10-01
Forest fires cause substantial ecological and economic damage. South Korea is particularly vulnerable to forest fires because mountainous terrain covers more than half of its land area. Korea recently launched COMS (Communication, Ocean and Meteorological Satellite), a geostationary satellite. In this paper, we developed a forest fire detection algorithm using COMS data. Forest fire detection algorithms generally use the characteristics of the 4 and 11 micrometer brightness temperatures; our algorithm additionally uses LST (Land Surface Temperature). We confirmed the results of our fire detection algorithm using statistical data from the Korea Forest Service and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) images. We used data for South Korea on April 1 and 2, 2011, when both small and large forest fires occurred. The detection rate was 80% in terms of the frequency of the forest fires and 99% in terms of the damaged area. Considering the number of COMS channels and its low resolution, this result is a remarkable outcome. To provide users with the results of our algorithm, we developed a smartphone application using JSP (JavaServer Pages); this application works regardless of the smartphone's operating system. This study may not generalize to other areas and dates because only two days of data were used; to improve the accuracy of our algorithm, analysis using long-term data is needed as future work.
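As a hedged illustration of the kind of test described (a 4 micrometer brightness temperature threshold, a 4-11 micrometer difference test, plus an LST-based check), the sketch below flags candidate fire pixels; all thresholds are invented for illustration and are not the tuned values used with COMS.

```python
import numpy as np

def fire_mask(bt4, bt11, lst, t4_min=320.0, dt_min=15.0, lst_excess=10.0):
    """Flag candidate fire pixels from geostationary imager data.

    bt4, bt11 : brightness temperatures at ~4 and ~11 micrometres (K)
    lst       : land surface temperature (K)
    Thresholds are illustrative only, not the algorithm's tuned values.
    """
    return (bt4 > t4_min) & ((bt4 - bt11) > dt_min) & (bt4 > lst + lst_excess)

bt4 = np.array([[330.0, 300.0], [345.0, 310.0]])
bt11 = np.array([[305.0, 295.0], [310.0, 300.0]])
lst = np.array([[310.0, 298.0], [315.0, 305.0]])
print(fire_mask(bt4, bt11, lst))
```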
Helicopter rotor blade frequency evolution with damage growth and signal processing
NASA Astrophysics Data System (ADS)
Roy, Niranjan; Ganguli, Ranjan
2005-05-01
Structural damage in materials evolves over time due to growth of fatigue cracks in homogeneous materials and a complicated process of matrix cracking, delamination, fiber breakage and fiber-matrix debonding in composite materials. In this study, a finite element model of the helicopter rotor blade is used to analyze the effect of damage growth on the modal frequencies in a qualitative manner. Phenomenological models of material degradation for homogeneous and composite materials are used. Results show that damage can be detected by monitoring changes in lower as well as higher mode flap (out-of-plane bending), lag (in-plane bending) and torsion rotating frequencies, especially for composite materials where the onset of the last stage of damage, fiber breakage, is most critical. Curve fits are also proposed for mathematical modeling of the relationship between rotating frequencies and cycles. Finally, since operational data are noisy and also contaminated with outliers, denoising algorithms based on recursive median filters, radial basis function neural networks and wavelets are studied and compared with a moving average filter using simulated data for improved health-monitoring application. A novel recursive median filter is designed using integer programming through a genetic algorithm and is found to have performance comparable to neural networks with much less complexity, and to be better than wavelet denoising for outlier removal. This filter is proposed as a tool for denoising time series of damage indicators.
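A minimal sketch contrasting a plain (non-recursive) median filter with a moving average on a simulated frequency series containing one outlier; the GA-designed recursive median filter of the paper is not reproduced, and all signal parameters below are invented.

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(1)
cycles = np.arange(500)
# Slowly drifting frequency ratio with noise, standing in for a damage indicator.
freq = 1.0 - 1e-4 * cycles + 0.002 * rng.standard_normal(cycles.size)
freq[250] += 0.05                      # injected outlier

median_smoothed = medfilt(freq, kernel_size=7)
moving_average = np.convolve(freq, np.ones(7) / 7, mode="same")

# The median filter suppresses the single outlier at cycle 250 far better
# than the moving average, which smears it across the window.
print(freq[250], median_smoothed[250], moving_average[250])
```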
A Data-Driven Approach to Assess Coastal Vulnerability: Machine Learning from Hurricane Sandy
NASA Astrophysics Data System (ADS)
Foti, R.; Miller, S. M.; Montalto, F. A.
2015-12-01
As climate changes and population living along the coastlines continues to increase, an understanding of coastal risk and vulnerability to extreme events becomes increasingly important. With as many as 700,000 people living less than 3 m above the high tide line, New York City (NYC) represents one of the most threatened among major world cities. Recent events, most notably Hurricane Sandy, have put tremendous pressure on the mosaic of economic, environmental, and social activities occurring in NYC at the interface between land and water. Using information on property damages collected by the Civil Air Patrol (CAP) after Hurricane Sandy, we developed a machine-learning based model able to identify the primary factors determining the occurrence and the severity of damages and intended to both assess and predict coastal vulnerability. The available dataset consists of categorical classifications of damages, ranging from 0 (not damaged) to 5 (damaged and flooded), available for a sample of buildings in the NYC area. A set of algorithms, including Logistic Regression, Gradient Boosting and Random Forest, was trained on 75% of the available dataset and tested on the remaining 25%, both training and test sets being picked at random. A combination of factors, including elevation, distance from shore, surge depth, soil type and proximity to key topographic features, such as wetlands and parks, was used as the set of predictors. Trained algorithms were able to achieve over 85% prediction accuracy on both the training set and, most notably, the test set, with as few as six predictors, allowing a realistic depiction of the field of damage. Given their accuracy and robustness, we believe that these algorithms can be successfully applied to provide fields of coastal vulnerability for future extreme events, as well as to assess the consequences of changes, whether intended (e.g. land use change) or contingent (e.g. sea level rise), in the physical layout of NYC.
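A hedged scikit-learn sketch of the described workflow (75/25 random split, random forest on six predictors); the data here are synthetic, so the accuracy printed will be near chance rather than the roughly 85% reported on the CAP damage classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Six illustrative predictors standing in for elevation, distance to shore,
# surge depth, soil class, distance to wetland and distance to park.
X = rng.normal(size=(n, 6))
y = rng.integers(0, 6, size=n)        # damage categories 0 (none) to 5 (flooded)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```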
NASA Astrophysics Data System (ADS)
Ye, J.; Shi, J.; De Hoop, M. V.
2017-12-01
We develop a robust algorithm to compute seismic normal modes in a spherically symmetric, non-rotating Earth. A well-known problem is the cross-contamination of modes near "intersections" of dispersion curves for separate waveguides. Our novel computational approach completely avoids artificial degeneracies by guaranteeing orthonormality among the eigenfunctions. We extend Wiggins' and Buland's work, and reformulate the Sturm-Liouville problem as a generalized eigenvalue problem with the Rayleigh-Ritz Galerkin method. A special projection operator incorporating the gravity terms proposed by de Hoop and a displacement/pressure formulation are utilized in the fluid outer core to project out the essential spectrum. Moreover, the weak variational form enables us to achieve high accuracy across the solid-fluid boundary, especially for Stoneley modes, which have exponentially decaying behavior. We also employ the mixed finite element technique to avoid spurious pressure modes arising from discretization schemes and a numerical inf-sup test is performed following Bathe's work. In addition, the self-gravitation terms are reformulated to avoid computations outside the Earth, thanks to the domain decomposition technique. Our package enables us to study the physical properties of intersection points of waveguides. According to Okal's classification theory, the group velocities should be continuous within a branch of the same mode family. However, we have found that there will be a small "bump" near intersection points, which is consistent with Miropol'sky's observation. In fact, we can loosely regard Earth's surface and the CMB as independent waveguides. For those modes that are far from the intersection points, their eigenfunctions are localized in the corresponding waveguides. However, those that are close to intersection points will have physical features of both waveguides, which means they cannot be classified in either family. Our results improve on Okal's classification, demonstrating that dispersion curves from independent waveguides should be considered to break at intersection points.
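The Rayleigh-Ritz Galerkin step leads to a generalized symmetric eigenvalue problem A x = λ B x whose eigenvectors are B-orthonormal; a toy sketch with SciPy follows (the actual stiffness and mass matrices come from the finite element assembly and are not reproduced here).

```python
import numpy as np
from scipy.linalg import eigh

# Toy symmetric "stiffness" A and symmetric positive-definite "mass" B matrices
# standing in for the Rayleigh-Ritz Galerkin discretization.
n = 6
rng = np.random.default_rng(0)
M = rng.normal(size=(n, n))
A = M + M.T
B = np.eye(n) + 0.1 * (M @ M.T)        # symmetric positive definite

eigvals, eigvecs = eigh(A, B)          # solves A x = lambda B x
# Eigenvectors come out B-orthonormal, which is the orthonormality guarantee
# that keeps nearby dispersion branches from cross-contaminating each other.
print(np.round(eigvecs.T @ B @ eigvecs, 6))
```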
Model-based damage evaluation of layered CFRP structures
NASA Astrophysics Data System (ADS)
Munoz, Rafael; Bochud, Nicolas; Rus, Guillermo; Peralta, Laura; Melchor, Juan; Chiachío, Juan; Chiachío, Manuel; Bond, Leonard J.
2015-03-01
An ultrasonic evaluation technique for damage identification of layered CFRP structures is presented. This approach relies on a model-based estimation procedure that combines experimental data and simulation of ultrasonic damage-propagation interactions. The CFRP structure, a [0/90]4s lay-up, has been tested in an immersion through transmission experiment, where a scan has been performed on a damaged specimen. Most ultrasonic techniques in industrial practice consider only a few features of the received signals, namely, time of flight, amplitude, attenuation, frequency contents, and so forth. In this case, once signals are captured, an algorithm is used to reconstruct the complete signal waveform and extract the unknown damage parameters by means of modeling procedures. A linear version of the data processing has been performed, where only the Young's modulus has been monitored and, in a second nonlinear version, the first order nonlinear coefficient β was incorporated to test the possibility of detection of early damage. The aforementioned physical simulation models are solved by the Transfer Matrix formalism, which has been extended from the linear to the nonlinear harmonic generation technique. The damage parameter search strategy is based on minimizing the mismatch between the captured and simulated signals in the time domain in an automated way using Genetic Algorithms. Processing all scanned locations, a C-scan of the parameter of each layer can be reconstructed, obtaining the information describing the state of each layer and each interface. Damage can be located and quantified in terms of changes in the selected parameter with a measurable extension. In the case of the nonlinear coefficient of first order, evidence of higher sensitivity to damage than imaging the linearly estimated Young's modulus is provided.
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.
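A hedged sketch of a weighted nonlinear least-squares fit of the kind the WNLS step might perform, using a deliberately simplified two-tracer time-activity model in place of the paper's reduced-parameter-space kinetic model; all parameters, weights and the injection delay below are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def tac_model(params, t, t_shift):
    """Toy two-tracer time-activity model: tracer 1 injected at t = 0,
    tracer 2 at t = t_shift; single-exponential kinetics per tracer.
    A simplified placeholder, not the paper's RPS model."""
    a1, k1, a2, k2 = params
    c1 = a1 * np.exp(-k1 * t)
    c2 = np.where(t >= t_shift, a2 * np.exp(-k2 * (t - t_shift)), 0.0)
    return c1 + c2

def weighted_residuals(params, t, y, w, t_shift):
    return np.sqrt(w) * (tac_model(params, t, t_shift) - y)

t = np.linspace(0, 60, 120)                          # minutes
true = np.array([10.0, 0.15, 6.0, 0.05])
rng = np.random.default_rng(0)
y = tac_model(true, t, t_shift=20.0) + rng.normal(0, 0.2, t.size)
w = 1.0 / np.maximum(y, 0.1)                         # Poisson-like weights

fit = least_squares(weighted_residuals, x0=[5, 0.1, 5, 0.1],
                    args=(t, y, w, 20.0), bounds=(0, np.inf))
print(np.round(fit.x, 3))
```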
Reducing salt in food; setting product-specific criteria aiming at a salt intake of 5 g per day.
Dötsch-Klerk, M; Goossens, W P M M; Meijer, G W; van het Hof, K H
2015-07-01
There is an increasing public health concern regarding high salt intake, which is generally between 9 and 12 g per day, and much higher than the 5 g recommended by World Health Organization. Several relevant sectors of the food industry are engaged in salt reduction, but it is a challenge to reduce salt in products without compromising on taste, shelf-life or expense for consumers. The objective was to develop globally applicable salt reduction criteria as guidance for product reformulation. Two sets of product group-specific sodium criteria were developed to reduce salt levels in foods to help consumers reduce their intake towards an interim intake goal of 6 g/day, and—on the longer term—5 g/day. Data modelling using survey data from the United States, United Kingdom and Netherlands was performed to assess the potential impact on population salt intake of cross-industry food product reformulation towards these criteria. Modelling with 6 and 5 g/day criteria resulted in estimated reductions in population salt intake of 25 and 30% for the three countries, respectively, the latter representing an absolute decrease in the median salt intake of 1.8-2.2 g/day. The sodium criteria described in this paper can serve as guidance for salt reduction in foods. However, to enable achieving an intake of 5 g/day, salt reduction should not be limited to product reformulation. A multi-stakeholder approach is needed to make consumers aware of the need to reduce their salt intake. Nevertheless, dietary impact modelling shows that product reformulation by food industry has the potential to contribute substantially to salt-intake reduction.
Reducing salt in food; setting product-specific criteria aiming at a salt intake of 5 g per day
Dötsch-Klerk, M; PMM Goossens, W; Meijer, G W; van het Hof, K H
2015-01-01
Background/Objectives: There is an increasing public health concern regarding high salt intake, which is generally between 9 and 12 g per day, and much higher than the 5 g recommended by World Health Organization. Several relevant sectors of the food industry are engaged in salt reduction, but it is a challenge to reduce salt in products without compromising on taste, shelf-life or expense for consumers. The objective was to develop globally applicable salt reduction criteria as guidance for product reformulation. Subjects/Methods: Two sets of product group-specific sodium criteria were developed to reduce salt levels in foods to help consumers reduce their intake towards an interim intake goal of 6 g/day, and—on the longer term—5 g/day. Data modelling using survey data from the United States, United Kingdom and Netherlands was performed to assess the potential impact on population salt intake of cross-industry food product reformulation towards these criteria. Results: Modelling with 6 and 5 g/day criteria resulted in estimated reductions in population salt intake of 25 and 30% for the three countries, respectively, the latter representing an absolute decrease in the median salt intake of 1.8–2.2 g/day. Conclusions: The sodium criteria described in this paper can serve as guidance for salt reduction in foods. However, to enable achieving an intake of 5 g/day, salt reduction should not be limited to product reformulation. A multi-stakeholder approach is needed to make consumers aware of the need to reduce their salt intake. Nevertheless, dietary impact modelling shows that product reformulation by food industry has the potential to contribute substantially to salt-intake reduction. PMID:25690867
Northrop, Paul W. C.; Pathak, Manan; Rife, Derek; ...
2015-03-09
Lithium-ion batteries are an important technology to facilitate efficient energy storage and enable a shift from petroleum based energy to more environmentally benign sources. Such systems can be utilized most efficiently if good understanding of performance can be achieved for a range of operating conditions. Mathematical models can be useful to predict battery behavior to allow for optimization of design and control. An analytical solution is ideally preferred to solve the equations of a mathematical model, as it eliminates the error that arises when using numerical techniques and is usually computationally cheap. An analytical solution provides insight into the behavior of the system and also explicitly shows the effects of different parameters on the behavior. However, most engineering models, including the majority of battery models, cannot be solved analytically due to non-linearities in the equations and state dependent transport and kinetic parameters. The numerical method used to solve the system of equations describing a battery operation can have a significant impact on the computational cost of the simulation. In this paper, a model reformulation of the porous electrode pseudo three dimensional (P3D) which significantly reduces the computational cost of lithium ion battery simulation, while maintaining high accuracy, is discussed. This reformulation enables the use of the P3D model into applications that would otherwise be too computationally expensive to justify its use, such as online control, optimization, and parameter estimation. Furthermore, the P3D model has proven to be robust enough to allow for the inclusion of additional physical phenomena as understanding improves. In this study, the reformulated model is used to allow for more complicated physical phenomena to be considered for study, including thermal effects.
Mantilla Herrera, Ana Maria; Crino, Michelle; Erskine, Holly E; Sacks, Gary; Ananthapavan, Jaithri; Mhurchu, Cliona Ni; Lee, Yong Yi
2018-05-14
The Health Star Rating (HSR) system is a voluntary front-of-pack labelling (FoPL) initiative endorsed by the Australian government in 2014. This study examines the impact of the HSR system on pre-packaged food reformulation measured by changes in energy density between products with and without HSR. The cost-effectiveness of the HSR system was modelled using a proportional multi-state life table Markov model for the 2010 Australian population. We evaluated scenarios in which the HSR system was implemented on a voluntary and mandatory basis (i.e., HSR uptake across 6.7% and 100% of applicable products, respectively). The main outcomes were health-adjusted life years (HALYs), net costs, and incremental cost-effectiveness ratios (ICERs). These were calculated with accompanying 95% uncertainty intervals (95% UI). The model predicted that HSR-attributable reformulation leads to small reductions in mean population energy intake (voluntary: 0.98 kJ/day [95% UI: -1.08 to 2.86]; mandatory: 11.81 kJ/day [95% UI: -11.24 to 36.13]). These are likely to result in reductions in mean body weight (voluntary: 0.01 kg [95% UI: -0.01 to 0.03]; mandatory: 0.11 kg [95% UI: -0.12 to 0.32], and HALYs (voluntary: 4207 HALYs [95% UI: 2438 to 6081]; mandatory: 49,949 HALYs [95% UI: 29,291 to 72,153]). The HSR system evaluated via changes in reformulation could be considered cost-effective relative to a willingness-to-pay threshold of A$50,000 per HALY (voluntary: A$1728 per HALY [95% UI: dominant to 10,445] and mandatory: A$4752 per HALY [95% UI: dominant to 16,236]).
Impact of abuse-deterrent OxyContin on prescription opioid utilization.
Hwang, Catherine S; Chang, Hsien-Yen; Alexander, G Caleb
2015-02-01
We quantified the degree to which the August 2010 reformulation of abuse-deterrent OxyContin affected its use, as well as the use of alternative extended-release and immediate-release opioids. We used the IMS Health National Prescription Audit, a nationally representative source of prescription activity in the USA, to conduct a segmented time-series analysis of the use of OxyContin and other prescription opioids. Our primary time period of interest was 12 months prior to and following August 2010. We performed model checks and sensitivity analyses, such as adjusting for marketing and promotion, using alternative lag periods, and adding extra observation points. OxyContin sales were similar before and after the August 2010 reformulation, with approximately 550 000 monthly prescriptions. After adjusting for declines in the generic extended-release oxycodone market, the formulation change was associated with a reduction of approximately 18 000 OxyContin prescription sales per month (p = 0.02). This decline corresponded to a change in the annual growth rate of OxyContin use, from 4.9% prior to the reformulation to -23.8% during the year after the reformulation. There were no statistically significant changes associated with the sales of alternative extended-release (p = 0.42) or immediate-release (p = 0.70) opioids. Multiple sensitivity analyses supported these findings and their substantive interpretation. The market debut of abuse-deterrent OxyContin was associated with declines in its use after accounting for the simultaneous contraction of the generic extended-release oxycodone market. Further scrutiny into the effect of abuse-deterrent formulations on medication use and health outcomes is vital given their popularity in opioid drug development. Copyright © 2014 John Wiley & Sons, Ltd.
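A minimal statsmodels sketch of a segmented (interrupted) time-series regression of the kind described, with an immediate level-change term and a post-period trend term at the reformulation date; the monthly counts below are simulated, not IMS Health data, and the paper's adjustments (marketing, lag periods, generic-market contraction) are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic monthly prescription counts, 12 months pre / 12 months post reformulation.
months = np.arange(24)
post = (months >= 12).astype(int)              # indicator for the post-change period
time_since = np.where(post == 1, months - 12, 0)
rng = np.random.default_rng(3)
rx = (550_000 + 2_000 * months - 20_000 * post - 12_000 * time_since
      + rng.normal(0, 5_000, months.size))

X = sm.add_constant(pd.DataFrame({"time": months, "post": post,
                                  "time_since": time_since}))
model = sm.OLS(rx, X).fit()
# "post" estimates the immediate level change and "time_since" the change in
# trend after the formulation switch.
print(model.params.round(0))
```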
Comparison of two algorithms in the automatic segmentation of blood vessels in fundus images
NASA Astrophysics Data System (ADS)
LeAnder, Robert; Chowdary, Myneni Sushma; Mokkapati, Swapnasri; Umbaugh, Scott E.
2008-03-01
Effective timing and treatment are critical to saving the sight of patients with diabetes. Lack of screening, as well as a shortage of ophthalmologists, help contribute to approximately 8,000 cases per year of people who lose their sight to diabetic retinopathy, the leading cause of new cases of blindness [1] [2]. Timely treatment for diabetic retinopathy prevents severe vision loss in over 50% of eyes tested [1]. Fundus images can provide information for detecting and monitoring eye-related diseases, like diabetic retinopathy, which if detected early, may help prevent vision loss. Damaged blood vessels can indicate the presence of diabetic retinopathy [9]. So, early detection of damaged vessels in retinal images can provide valuable information about the presence of disease, thereby helping to prevent vision loss. Purpose: The purpose of this study was to compare the effectiveness of two blood vessel segmentation algorithms. Methods: Fifteen fundus images from the STARE database were used to develop two algorithms using the CVIPtools software environment. Another set of fifteen images were derived from the first fifteen and contained ophthalmologists' hand-drawn tracings over the retinal vessels. The ophthalmologists' tracings were used as the "gold standard" for perfect segmentation and compared with the segmented images that were output by the two algorithms. Comparisons between the segmented and the hand-drawn images were made using Pratt's Figure of Merit (FOM), Signal-to-Noise Ratio (SNR) and Root Mean Square (RMS) Error. Results: Algorithm 2 has an FOM that is 10% higher than Algorithm 1. Algorithm 2 has a 6%-higher SNR than Algorithm 1. Algorithm 2 has only 1.3% more RMS error than Algorithm 1. Conclusions: Algorithm 1 extracted most of the blood vessels with some missing intersections and bifurcations. Algorithm 2 extracted all the major blood vessels, but eradicated some vessels as well. Algorithm 2 outperformed Algorithm 1 in terms of visual clarity, FOM and SNR. The performances of these algorithms show that they have an appreciable amount of potential in helping ophthalmologists detect the severity of eye-related diseases and prevent vision loss.
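A hedged implementation of Pratt's Figure of Merit used for such comparisons, assuming the commonly used scaling constant α = 1/9 (the study's exact constant is not stated here); the gold-standard and detected maps below are toy arrays, not STARE data.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(detected, reference, alpha=1.0 / 9.0):
    """Pratt's Figure of Merit between binary detected and reference maps.

    For each detected pixel, d is its distance to the nearest reference pixel;
    FOM = (1 / max(Nd, Nr)) * sum over detected pixels of 1 / (1 + alpha * d^2).
    """
    detected = detected.astype(bool)
    reference = reference.astype(bool)
    # Distance from every pixel to the nearest reference (gold standard) pixel.
    dist_to_ref = distance_transform_edt(~reference)
    n_d, n_r = detected.sum(), reference.sum()
    if n_d == 0 or n_r == 0:
        return 0.0
    return float(np.sum(1.0 / (1.0 + alpha * dist_to_ref[detected] ** 2))
                 / max(n_d, n_r))

ref = np.zeros((16, 16), bool); ref[8, :] = True
det = np.zeros((16, 16), bool); det[9, :] = True       # off by one pixel
print(pratt_fom(det, ref))                              # -> 0.9
```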
Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring
NASA Technical Reports Server (NTRS)
Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.
2015-01-01
Project presentation for the Game Changing Program Smart Book release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and includes an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. Objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept ground-based damage detection and inspection system.
Discriminating between two reformulations of SU(3) Yang-Mills theory on a lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shibata, Akihiro; Kondo, Kei-Ichi; Shinohara, Toru
2016-01-22
In order to investigate quark confinement, we give a new reformulation of the SU(N) Yang-Mills theory on a lattice and present the results of numerical simulations of the SU(3) Yang-Mills theory on a lattice. The numerical simulations include the derivation of the linear potential for the static interquark potential, i.e., non-vanishing string tension, in which the “Abelian” dominance and magnetic monopole dominance are established; confirmation of the dual Meissner effect by measuring the chromoelectric flux tube between a quark-antiquark pair, the induced magnetic-monopole current, and the type of dual superconductivity; etc.
Roache, Sarah A.; Gostin, Lawrence O.
2017-01-01
Globally, soda taxes are gaining momentum as powerful interventions to discourage sugar consumption and thereby reduce the growing burden of obesity and non-communicable diseases (NCDs). Evidence from early adopters including Mexico and Berkeley, California, confirms that soda taxes can disincentivize consumption through price increases and raise revenue to support government programs. The United Kingdom’s new graduated levy on sweetened beverages is yielding yet another powerful impact: soda manufacturers are reformulating their beverages to significantly reduce the sugar content. Product reformulation – whether incentivized or mandatory – helps reduce overconsumption of sugars at the societal level, moving away from the long-standing notion of individual responsibility in favor of collective strategies to promote health. But as a matter of health equity, soda product reformulation should occur globally, especially in low- and middle-income countries (LMICs), which are increasingly targeted as emerging markets for soda and junk food and are disproportionately impacted by NCDs. As global momentum for sugar reduction increases, governments and public health advocates should harness the power of soda taxes to tackle the economic, social, and informational drivers of soda consumption, driving improvements in food environments and the public’s health. PMID:28949460
Active sensors for health monitoring of aging aerospace structures
NASA Astrophysics Data System (ADS)
Giurgiutiu, Victor; Redmond, James M.; Roach, Dennis P.; Rackow, Kirk
2000-06-01
A project to develop non-intrusive active sensors that can be applied on existing aging aerospace structures for monitoring the onset and progress of structural damage (fatigue cracks and corrosion) is presented. The state of the art in active-sensor structural health monitoring and damage detection is reviewed. Methods based on (a) elastic wave propagation and (b) the electro-mechanical (E/M) impedance technique are cited and briefly discussed. The instrumentation of these specimens with piezoelectric active sensors is illustrated. The main detection strategies (E/M impedance for local-area detection and wave propagation for wide-area interrogation) are discussed. The signal processing and damage interpretation algorithms are tuned to the specific structural interrogation method used. In the high-frequency E/M impedance approach, pattern recognition methods are used to compare impedance signatures taken at various time intervals and to identify damage presence and progression from the change in these signatures. In the wave propagation approach, acousto-ultrasonic methods that identify additional reflections generated at the damage site and changes in transmission velocity and phase are used. Both approaches benefit from the use of artificial intelligence neural network algorithms that can extract damage features based on a learning process. The design and fabrication of a set of structural specimens representative of aging aerospace structures is presented. Three built-up specimens (pristine, with cracks, and with corrosion damage) are used. The specimen instrumentation with active sensors fabricated at the University of South Carolina is illustrated. Preliminary results obtained with the E/M impedance method on pristine and cracked specimens are presented.
Nondamaging Retinal Laser Therapy: Rationale and Applications to the Macula.
Lavinsky, Daniel; Wang, Jenny; Huie, Philip; Dalal, Roopa; Lee, Seung Jun; Lee, Dae Yeong; Palanker, Daniel
2016-05-01
Retinal photocoagulation and nondamaging laser therapy are used for treatment of macular disorders, without understanding of the response mechanism and with no rationale for dosimetry. To establish a proper titration algorithm, we measured the range of tissue response and damage threshold. We then evaluated safety and efficacy of nondamaging retinal therapy (NRT) based on this algorithm for chronic central serous chorioretinopathy (CSCR) and macular telangiectasia (MacTel). Retinal response to laser treatment below damage threshold was assessed in pigmented rabbits by expression of the heat shock protein HSP70 and glial fibrillary acidic protein (GFAP). Energy was adjusted relative to visible titration using the Endpoint Management (EpM) algorithm. In clinical studies, 21 eyes with CSCR and 10 eyes with MacTel were treated at 30% EpM energy with high spot density (0.25-diameter spacing). Visual acuity, retinal and choroidal thickness, and subretinal fluid were monitored for 1 year. At 25% EpM energy and higher, HSP70 was expressed acutely in RPE, and GFAP upregulation in Müller cells was observed at 1 month. Damage appeared starting at 40% setting. Subretinal fluid resolved completely in 81% and partially in 19% of the CSCR patients, and visual acuity improved by 12 ± 3 letters. Lacunae in the majority of MacTel patients decreased while preserving the retinal thickness, and vision improved by 10 letters. Heat shock protein expression in response to hyperthermia helps define the therapeutic window for NRT. Lack of tissue damage enables high-density treatment to boost clinical efficacy, therapy in the fovea, and retreatments to manage chronic diseases.
Two-stage damage diagnosis based on the distance between ARMA models and pre-whitening filters
NASA Astrophysics Data System (ADS)
Zheng, H.; Mita, A.
2007-10-01
This paper presents a two-stage damage diagnosis strategy for damage detection and localization. Auto-regressive moving-average (ARMA) models are fitted to time series of vibration signals recorded by sensors. In the first stage, a novel damage indicator, which is defined as the distance between ARMA models, is applied to damage detection. This stage can determine the existence of damage in the structure. Such an algorithm uses output only and does not require operator intervention. Therefore it can be embedded in the sensor board of a monitoring network. In the second stage, a pre-whitening filter is used to minimize the cross-correlation of multiple excitations. With this technique, the damage indicator can further identify the damage location and severity when the damage has been detected in the first stage. The proposed methodology is tested using simulation and experimental data. The analysis results clearly illustrate the feasibility of the proposed two-stage damage diagnosis methodology.
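To illustrate the first-stage indicator described above, the following is a minimal sketch rather than the authors' implementation: pure autoregressive (AR) models are fitted to a reference record and a current record by least squares, and the Euclidean distance between the coefficient vectors stands in for the distance between ARMA models. The model order and the toy AR(2) data are illustrative assumptions.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns the coefficient vector."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def ar_distance(x_ref, x_cur, p=10):
    """Damage indicator: distance between AR coefficient vectors of two records."""
    return np.linalg.norm(fit_ar(x_ref, p) - fit_ar(x_cur, p))

def simulate_ar(a, n, rng):
    """Generate an AR process with coefficients a (for lags 1..len(a))."""
    x, e = np.zeros(n), rng.standard_normal(n)
    for t in range(len(a), n):
        x[t] = a @ x[t - len(a):t][::-1] + e[t]
    return x

# Toy usage: a small change in the system dynamics increases the indicator.
rng = np.random.default_rng(0)
healthy = simulate_ar(np.array([1.2, -0.6]), 4000, rng)
damaged = simulate_ar(np.array([1.1, -0.5]), 4000, rng)
print(ar_distance(healthy, healthy[:2000]), ar_distance(healthy, damaged))
```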
Structural Health Monitoring challenges on the 10-MW offshore wind turbine model
NASA Astrophysics Data System (ADS)
Di Lorenzo, E.; Kosova, G.; Musella, U.; Manzato, S.; Peeters, B.; Marulo, F.; Desmet, W.
2015-07-01
Real-time structural damage detection on large slender structures has one of its main applications in offshore Horizontal Axis Wind Turbines (HAWT). The renewable energy market is continuously pushing wind turbine sizes and performance, which is why offshore wind turbine concepts are now moving toward a 10-MW reference wind turbine model. The aim of this work is to perform operational analyses on the 10-MW reference wind turbine finite element model using an aeroelastic code in order to obtain long-duration, low-cost simulations. The aeroelastic code allows damage to be simulated in several ways: by reducing the edgewise/flapwise blade stiffness, by adding lumped masses, or by considering a progressive mass addition (e.g., ice on the blades). The damage detection is then performed by means of Operational Modal Analysis (OMA) techniques. Virtual accelerometers are placed in order to simulate real measurements and to estimate the modal parameters. The feasibility of robust damage detection has been assessed on the HAWT model in parked conditions. The situation is much more complicated for operating wind turbines because the time periodicity of the structure needs to be taken into account. Several algorithms have been implemented and tested in the simulation environment. They are needed in order to carry out a damage detection simulation campaign and to develop a feasible real-time damage detection method. In addition to these algorithms, harmonic removal tools are needed in order to remove the harmonics due to rotation.
Automatic Detection of Storm Damages Using High-Altitude Photogrammetric Imaging
NASA Astrophysics Data System (ADS)
Litkey, P.; Nurminen, K.; Honkavaara, E.
2013-05-01
The risks of storms that cause damage in forests are increasing due to climate change. Quickly detecting fallen trees, assessing the amount of fallen trees and efficiently collecting them are of great importance for economic and environmental reasons. Visually detecting and delineating storm damage is a laborious and error-prone process; thus, it is important to develop cost-efficient and highly automated methods. The objective of our research project is to investigate and develop a reliable and efficient method for automatic storm damage detection based on airborne imagery collected after a storm. The method requires before-storm and after-storm surface models. A difference surface is calculated from the two DSMs, and the locations where significant changes have appeared are automatically detected. In our previous research we used a four-year-old airborne laser scanning surface model as the before-storm surface. The after-storm DSM was produced from the photogrammetric images using the Next Generation Automatic Terrain Extraction (NGATE) algorithm of the Socet Set software. We obtained 100% accuracy in the detection of major storm damage. In this investigation we will further evaluate the sensitivity of the storm-damage detection process. We will investigate the potential of national airborne photography, collected during the leaf-off season, to automatically produce a before-storm DSM using image matching. We will also compare the impact of the terrain extraction algorithm on the results. Our results will also promote the potential of national open source data sets in the management of natural disasters.
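The core detection step described above can be sketched as follows. This is a hedged illustration, not the NGATE/Socet Set pipeline: the before- and after-storm DSMs are simply differenced, and connected regions with a large height drop are flagged; the drop threshold and minimum region size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_fallen_canopy(dsm_before, dsm_after, drop_threshold=10.0, min_pixels=25):
    """Flag connected regions where the surface dropped by more than the threshold (metres)."""
    drop = dsm_before - dsm_after          # positive where canopy height was lost
    candidate = drop > drop_threshold
    labels, n = ndimage.label(candidate)   # group adjacent flagged cells
    sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    return keep                            # boolean damage mask

# Toy usage with synthetic 100 x 100 surface models.
rng = np.random.default_rng(1)
before = 20.0 + rng.normal(0, 0.5, (100, 100))
after = before.copy()
after[40:60, 40:60] -= 15.0                # a simulated patch of fallen trees
print(detect_fallen_canopy(before, after).sum())
```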
Guided wave propagation and spectral element method for debonding damage assessment in RC structures
NASA Astrophysics Data System (ADS)
Wang, Ying; Zhu, Xinqun; Hao, Hong; Ou, Jinping
2009-07-01
A concrete-steel interface spectral element is developed to study guided wave propagation along the steel rebar in concrete. Scalar damage parameters characterizing changes in the interface (debonding damage) are incorporated into the formulation of the spectral finite element, which is used for damage detection in reinforced concrete structures. Experimental tests are carried out on a reinforced concrete beam with embedded piezoelectric elements to verify the performance of the proposed model and algorithm. Parametric studies are performed to evaluate the effect of different damage scenarios on wave propagation in reinforced concrete structures. Numerical simulations and experimental results show that the method is effective in modeling wave propagation along the steel rebar in concrete and promising for detecting damage at the concrete-steel interface.
Simple and efficient self-healing strategy for damaged complex networks
NASA Astrophysics Data System (ADS)
Gallos, Lazaros K.; Fefferman, Nina H.
2015-11-01
The process of destroying a complex network through node removal has been the subject of extensive interest and research. Node loss typically leaves the network disintegrated into many small and isolated clusters. Here we show that these clusters typically remain close to each other and we suggest a simple algorithm that is able to reverse the inflicted damage by restoring the network's functionality. After damage, each node decides independently whether to create a new link depending on the fraction of neighbors it has lost. In addition to relying only on local information, where nodes do not need knowledge of the global network status, we impose the additional constraint that new links should be as short as possible (i.e., that the new edge completes a shortest possible new cycle). We demonstrate that this self-healing method operates very efficiently, both in model and real networks. For example, after removing the most connected airports in the USA, the self-healing algorithm rejoined almost 90% of the surviving airports.
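A hedged sketch of the healing rule summarized above is given below: each surviving node decides, with probability equal to the fraction of neighbours it lost, to add one new link, and the link goes to the nearest non-neighbour in its component so that the new edge closes a shortest possible cycle. The exact probability rule and tie-breaking are assumptions, not the authors' published algorithm, and disconnected clusters are not rejoined in this simplified sketch.

```python
import networkx as nx
import random

def self_heal(g_damaged, lost_neighbors, rng=random.Random(0)):
    """Each surviving node links, with probability equal to the fraction of
    neighbours it lost, to the closest node that is not already a neighbour
    (so the new edge closes a shortest possible new cycle)."""
    g = g_damaged.copy()
    for node, lost in lost_neighbors.items():
        deg_before = g_damaged.degree(node) + lost
        if deg_before == 0 or rng.random() > lost / deg_before:
            continue
        dist = nx.single_source_shortest_path_length(g, node)
        candidates = [(d, v) for v, d in dist.items() if d >= 2]
        if candidates:
            target = min(candidates, key=lambda t: t[0])[1]
            g.add_edge(node, target)
    return g

# Toy usage: damage a scale-free network by removing its hubs, then heal.
g0 = nx.barabasi_albert_graph(300, 2, seed=0)
hubs = sorted(g0.nodes, key=g0.degree, reverse=True)[:15]
g_dam = g0.copy()
g_dam.remove_nodes_from(hubs)
lost = {v: sum(1 for u in g0[v] if u in hubs) for v in g_dam.nodes}
g_healed = self_heal(g_dam, lost)
print(g_dam.number_of_edges(), g_healed.number_of_edges())  # healing restores some links
```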
A haptic-inspired audio approach for structural health monitoring decision-making
NASA Astrophysics Data System (ADS)
Mao, Zhu; Todd, Michael; Mascareñas, David
2015-03-01
Haptics is the field at the interface of human touch (tactile sensation) and classification, whereby tactile feedback is used to train and inform a decision-making process. In structural health monitoring (SHM) applications, haptic devices have been introduced and applied in a simplified laboratory-scale scenario, in which nonlinearity, representing the presence of damage, was encoded into a vibratory manual interface. In this paper, the "spirit" of haptics is adopted, but ultrasonic guided wave scattering information is transformed into audio (rather than tactile) range signals. After sufficient training, the structural damage condition, including occurrence and location, can be identified through the encoded audio waveforms. Different algorithms are employed in this paper to generate the transformed audio signals; the performance of each encoding algorithm is compared, and also compared with standard machine learning classifiers. In the long run, this haptic-inspired decision-making approach aims to detect and classify structural damage in more demanding environments and to approach a baseline-free implementation with embedded temperature compensation.
Satellite change detection of forest damage near the Chernobyl accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClellan, G.E.; Anno, G.H.
1992-01-01
A substantial amount of forest within a few kilometers of the Chernobyl nuclear reactor station was badly contaminated with radionuclides by the April 26, 1986, explosion and ensuing fire at reactor No. 4. Radiation doses to conifers in some areas were sufficient to cause discoloration of needles within a few weeks. Other areas, receiving smaller doses, showed foliage changes beginning 6 months to a year later. Multispectral imagery available from Landsat sensors is especially suited for monitoring such changes in vegetation. A series of Landsat Thematic Mapper images was developed that span the 2 yr following the accident. Quantitative dose estimation for the exposed conifers requires an objective change detection algorithm and knowledge of the dose-time response of conifers to ionizing radiation. Pacific-Sierra Research Corporation's Hyperscout™ algorithm is based on an advanced, sensitive technique for change detection particularly suited for multispectral images. The Hyperscout algorithm has been used to assess radiation damage to the forested areas around the Chernobyl nuclear power plant.
NASA Astrophysics Data System (ADS)
Ramírez-López, A.; Romero-Romo, M. A.; Muñoz-Negron, D.; López-Ramírez, S.; Escarela-Pérez, R.; Duran-Valencia, C.
2012-10-01
Computational models are developed to create grain structures using mathematical algorithms based on chaos theory, such as cellular automata, geometrical models, fractals, and stochastic methods. Because of the chaotic nature of grain structures, some of the most popular routines are based on the Monte Carlo method, statistical distributions, and random walk methods, which can be easily programmed and included in nested loops. Nevertheless, grain structures are not well defined, as a result of computational errors and numerical inconsistencies in the mathematical methods. Due to the finite precision of numbers and the numerical restrictions during the simulation of solidification, damaged images appear on the screen. These images must be repaired to obtain a good measurement of grain geometrical properties. In the present work, mathematical algorithms were developed to repair, measure, and characterize grain structures obtained from cellular automata. An appropriate measurement of grain size and the correct identification of interfaces and their lengths are very important topics in materials science because they allow mathematical models to be represented and validated against real samples. As a result, the developed algorithms are tested and shown to be appropriate and efficient for eliminating the errors and characterizing the grain structures.
NASA Astrophysics Data System (ADS)
Lakshmi, K.; Rama Mohan Rao, A.
2014-10-01
In this paper, a novel output-only damage-detection technique based on time-series models for structural health monitoring in the presence of environmental variability and measurement noise is presented. The large amount of data obtained in the form of time-history responses is transformed using principal component analysis in order to reduce the data size and thereby improve the computational efficiency of the proposed algorithm. The time instant of damage is obtained by fitting the acceleration time-history data from the structure using autoregressive (AR) and AR with exogenous inputs (ARX) time-series prediction models. The probability density functions (PDFs) of damage features obtained from the variances of prediction errors corresponding to the reference and current healthy data are found to shift away from each other due to the presence of various uncertainties such as environmental variability and measurement noise. Control limits based on a novelty index are obtained using the distances between the peaks of the PDF curves in the healthy condition and are used later for determining the current condition of the structure. Numerical simulation studies have been carried out using a simply supported beam and also validated using experimental benchmark data corresponding to a three-storey framed bookshelf structure proposed by Los Alamos National Laboratory. The studies carried out in this paper clearly indicate the efficiency of the proposed algorithm for damage detection in the presence of measurement noise and environmental variability.
NASA Astrophysics Data System (ADS)
Nag, A.; Mahapatra, D. Roy; Gopalakrishnan, S.
2003-10-01
A hierarchical Genetic Algorithm (GA) is implemented in high-performance spectral finite element software for the identification of delaminations in laminated composite beams. In smart structural health monitoring, the number of delaminations (or any other modes of damage) as well as their locations and sizes are in no way completely known. What is known are the healthy structural configuration (mass, stiffness and damping matrices updated from previous phases of monitoring), the sensor measurements and some information about the load environment. To handle such enormous complexity, a hierarchical GA is used to represent a heterogeneous population consisting of damaged structures with different numbers of delaminations, and its evolution process is used to identify the correct damage configuration in the structures under monitoring. We exploit this similarity with the evolution process in heterogeneous populations of species in nature to develop an automated procedure to decide which possible damaged configuration might have produced the deviation in the measured signals. The computational efficiency of the identification task is demonstrated by considering a single delamination. The behavior of the fitness function in the GA, which is an important factor for fast convergence, is studied for single and multiple delaminations. Several advantages of the approach in terms of computational cost are discussed. Besides tackling various other types of damage configurations, further scope for research on the development of hybrid soft-computing modules is highlighted.
Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.
2016-01-01
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230
Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C
2015-03-01
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization.
Providing structural modules with self-integrity monitoring
NASA Astrophysics Data System (ADS)
Walton, W. B.; Ibanez, P.; Yessaie, G.
1988-08-01
With the advent of complex space structures (i.e., U.S. Space Station), the need for methods for remotely detecting structural damage will become greater. Some of these structures will have hundreds of individual structural elements (i.e., strut members). Should some of them become damaged, it could be virtually impossible to detect it using visual or similar inspection techniques. The damage of only a few individual members may or may not be a serious problem. However, should a significant number of the members be damaged, a significant problem could be created. The implementation of an appropriate remote damage detection scheme would greatly reduce the likelihood of a serious problem related to structural damage ever occurring. This report presents the results of the research conducted on remote structural damage detection approaches and the related mathematical algorithms. The research was conducted for the Small Business Innovation and Research (SBIR) Phase 2 National Aeronautics and Space Administration (NASA) Contract NAS7-961.
Providing structural modules with self-integrity monitoring
NASA Technical Reports Server (NTRS)
Walton, W. B.; Ibanez, P.; Yessaie, G.
1988-01-01
With the advent of complex space structures (i.e., U.S. Space Station), the need for methods for remotely detecting structural damage will become greater. Some of these structures will have hundreds of individual structural elements (i.e., strut members). Should some of them become damaged, it could be virtually impossible to detect it using visual or similar inspection techniques. The damage of only a few individual members may or may not be a serious problem. However, should a significant number of the members be damaged, a significant problem could be created. The implementation of an appropriate remote damage detection scheme would greatly reduce the likelihood of a serious problem related to structural damage ever occurring. This report presents the results of the research conducted on remote structural damage detection approaches and the related mathematical algorithms. The research was conducted for the Small Business Innovation and Research (SBIR) Phase 2 National Aeronautics and Space Administration (NASA) Contract NAS7-961.
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
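A hedged sketch of the reformulation idea follows. Instead of the interior point method used in the paper, the l1 objective (pairwise terms over graph edges plus unary data terms) is minimized here by iteratively reweighted least squares, so that each iteration likewise reduces to a sparse, Laplacian-structured linear solve. The smoothing constant, the unary terms and the chain-graph example are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def l1_graph_label(edges, weights, unary_weight, unary_target, n_iter=30, eps=1e-6):
    """Minimise sum_ij w_ij |x_i - x_j| + sum_i u_i |x_i - f_i| by IRLS:
    each iteration solves a sparse, Laplacian-structured linear system."""
    n = len(unary_target)
    i, j = edges[:, 0], edges[:, 1]
    x = unary_target.astype(float).copy()
    for _ in range(n_iter):
        we = weights / (np.abs(x[i] - x[j]) + eps)        # reweighted edge terms
        wu = unary_weight / (np.abs(x - unary_target) + eps)
        A = sp.coo_matrix((we, (i, j)), shape=(n, n))
        A = A + A.T
        L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # weighted graph Laplacian
        x = spla.spsolve((L + sp.diags(wu)).tocsr(), wu * unary_target)
    return x

# Toy usage: denoise/segment a 1-D signal on a chain graph.
n = 60
f = np.r_[np.zeros(30), np.ones(30)] + 0.3 * np.random.default_rng(2).standard_normal(n)
edges = np.column_stack([np.arange(n - 1), np.arange(1, n)])
x = l1_graph_label(edges, np.full(n - 1, 2.0), np.ones(n), f)
print((x > 0.5).astype(int))
```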
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latour, P.R.
Revolutionary changes in quality specifications (number, complexity, uncertainty, economic sensitivity) for reformulated gasolines (RFG) and low-sulfur diesels (LSD) are being addressed by powerful, new, computer-integrated manufacturing technology for Refinery Information Systems and Advanced Process Control (RIS/APC). This paper shows how the five active RIS/APC functions (performance measurement, optimization, scheduling, control and integration) are used to manufacture new, clean fuels competitively. With current industry spending in this field averaging 2 to 3 cents/bbl of crude, many refineries can capture 50 to 100 cents/bbl if the technology is properly employed and sustained throughout refining operations, organizations, and businesses.
Discrete mathematical physics and particle modeling
NASA Astrophysics Data System (ADS)
Greenspan, D.
The theory and application of the arithmetic approach to the foundations of both Newtonian and special relativistic mechanics are explored. Using only arithmetic, a reformulation of the Newtonian approach is given for: gravity; particle modeling of solids, liquids, and gases; conservative modeling of laminar and turbulent fluid flow, heat conduction, and elastic vibration; and nonconservative modeling of heat convection, shock-wave generation, the liquid drop problem, porous flow, the interface motion of a melting solid, soap films, string vibrations, and solitons. An arithmetic reformulation of special relativistic mechanics is given for theory in one space dimension, relativistic harmonic oscillation, and theory in three space dimensions. A speculative quantum mechanical model of vibrations in the water molecule is also discussed.
National health insurance reconsidered: dilemmas and opportunities.
Battistella, R M; Weil, T P
1989-01-01
Changing social and economic constraints are precipitating a reformulation of the role of government in the provision of social welfare services. The authors conclude that government intervention in the health sector is bound to expand rather than contract because centralization is the key to reconciling otherwise divergent political demands for spending controls and greater equality of access to quality care for the increasing number of uninsured or underinsured persons. In the past eight years, the federal government has unleashed competitive market principles that have had negative side effects on the nation's health services. Payers, providers, and consumers will likely seek to protect themselves by forming coalitions, as happened recently in Massachusetts where the law now requires employers to provide minimum health insurance benefits to their employees. Escalating pressures to correct the damages from short-term piecemeal solutions to problems of health finance and delivery will provide the chief dynamic for universal health insurance in the United States. New economic, social, and political realities suggest, however, an eclectic strategy for attaining this goal that bears little resemblance to the conventional wisdom that guided health policy throughout the postwar period.
NASA Astrophysics Data System (ADS)
Rainieri, Carlo; Fabbrocino, Giovanni
2015-08-01
In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection is demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure's lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant drawback to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment are appearing following the development of several automated output-only modal identification procedures in the last few years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational efforts and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm based on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods. Promising results have been achieved for non-destructive testing as well as continuous monitoring purposes. They are documented in the last sections of the paper.
Model-Based Fatigue Prognosis of Fiber-Reinforced Laminates Exhibiting Concurrent Damage Mechanisms
NASA Technical Reports Server (NTRS)
Corbetta, M.; Sbarufatti, C.; Saxena, A.; Giglio, M.; Goebel, K.
2016-01-01
Prognostics of large composite structures is a topic of increasing interest in the field of structural health monitoring for aerospace, civil, and mechanical systems. Along with recent advancements in real-time structural health data acquisition and processing for damage detection and characterization, model-based stochastic methods for life prediction are showing promising results in the literature. Among various model-based approaches, particle-filtering algorithms are particularly capable of coping with uncertainties associated with the process. These include uncertainties about information on the damage extent and the inherent uncertainties of the damage propagation process. Some efforts have shown successful applications of particle filtering-based frameworks for predicting the matrix crack evolution and structural stiffness degradation caused by repetitive fatigue loads. Effects of other damage modes such as delamination, however, are not incorporated in these works. It is well established that delamination and matrix cracks not only co-exist in most laminate structures during the fatigue degradation process but also affect each other's progression. Furthermore, delamination significantly alters the stress state in the laminates and accelerates the material degradation leading to catastrophic failure. Therefore, the work presented herein proposes a particle filtering-based framework for predicting a structure's remaining useful life with consideration of multiple co-existing damage mechanisms. The framework uses an energy-based model from the composite modeling literature. The multiple damage-mode model has been shown to suitably estimate the energy release rate of cross-ply laminates as affected by matrix cracks and delamination modes. The model is also able to estimate the reduction in stiffness of the damaged laminate. This information is then used in the algorithms for life prediction capabilities. First, a brief summary of the energy-based damage model is provided. Then, the paper describes how the model is embedded within the prognostic framework and how the prognostic performance is assessed using observations from run-to-failure experiments.
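A generic, hedged sketch of the particle-filtering step is given below. A toy power-law growth model and a stiffness-loss observation model stand in for the energy-based multi-damage model used in the paper, and all constants, noise levels and measurements are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_p = 500                                   # number of particles
# State: damage extent D in [0, 1]; toy growth law D_{k+1} = D_k + c * D_k^m + noise
c, m = 1e-3, 1.2
particles = rng.uniform(0.01, 0.05, n_p)    # initial damage belief
weights = np.full(n_p, 1.0 / n_p)

def propagate(d):
    """Stochastic damage growth over one load block."""
    return np.clip(d + c * d**m + 1e-4 * rng.standard_normal(d.shape), 0.0, 1.0)

def likelihood(obs_stiffness_loss, d, sigma=0.02):
    """Observation model: measured stiffness loss ~ damage extent + Gaussian noise."""
    return np.exp(-0.5 * ((obs_stiffness_loss - d) / sigma) ** 2)

for obs in [0.04, 0.06, 0.09]:              # synthetic stiffness-loss measurements
    particles = propagate(particles)
    weights *= likelihood(obs, particles)
    weights /= weights.sum()
    # Systematic resampling to avoid particle degeneracy
    idx = np.searchsorted(np.cumsum(weights), (np.arange(n_p) + rng.random()) / n_p)
    particles, weights = particles[idx], np.full(n_p, 1.0 / n_p)

print(particles.mean())                     # current damage estimate after updating
```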
A Network Analysis of Countries’ Export Flows: Firm Grounds for the Building Blocks of the Economy
Caldarelli, Guido; Cristelli, Matthieu; Gabrielli, Andrea; Pietronero, Luciano; Scala, Antonio; Tacchella, Andrea
2012-01-01
In this paper we analyze the bipartite network of countries and products from UN data on country production. We define the country-country and product-product projected networks and introduce a novel method of filtering information based on elements' similarity. As a result we find that country clustering reveals unexpected socio-geographic links among the most competing countries. On the same footing, the product clustering can be efficiently used for a bottom-up classification of produced goods. Furthermore we mathematically reformulate the “reflections method” introduced by Hidalgo and Hausmann as a fixpoint problem; this formulation highlights some conceptual weaknesses of the approach. To overcome this issue, we introduce an alternative methodology (based on biased Markov chains) that allows countries to be ranked in a conceptually consistent way. Our analysis uncovers a strong non-linear interaction between the diversification of a country and the ubiquity of its products, thus suggesting the possible need to move towards more efficient and direct non-linear fixpoint algorithms to rank countries and products in the global market. PMID:23094044
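A minimal sketch of the "reflections method" written as the fixpoint-style iteration discussed above is given below (numpy only). The biased-Markov-chain alternative proposed in the paper is not reproduced, and the small country-product matrix is purely illustrative.

```python
import numpy as np

def method_of_reflections(M, n_iter=20):
    """M: binary country x product matrix. Iterates diversification/ubiquity averages."""
    kc0 = M.sum(axis=1).astype(float)       # k_{c,0}: diversification of each country
    kp0 = M.sum(axis=0).astype(float)       # k_{p,0}: ubiquity of each product
    kc, kp = kc0.copy(), kp0.copy()
    for _ in range(n_iter):
        # k_{c,N} = mean ubiquity of c's products, k_{p,N} = mean diversification of p's exporters
        kc, kp = (M @ kp) / kc0, (M.T @ kc) / kp0
    return kc, kp

# Toy usage: 5 countries x 6 products, every country exports at least one product.
M = np.array([[1, 1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 0, 1, 1]], dtype=float)
print(method_of_reflections(M)[0])
```

With many iterations the averaged values tend to flatten toward a common value and lose discriminating power, which is consistent with the conceptual weaknesses of the approach that the abstract alludes to.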
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require assumptions on the data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
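As an illustration of the first step described above, the following hedged sketch estimates a pairwise mutual-information matrix among variables by histogram binning (using scikit-learn's mutual_info_score on discretized columns). The Dirichlet process clustering and group elastic-net stages are not reproduced here, and the bin count and synthetic data are assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mutual_info_matrix(X, bins=16):
    """Pairwise mutual information between columns of X, estimated by equal-width binning."""
    n_var = X.shape[1]
    D = np.column_stack([np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins)[1:-1])
                         for j in range(n_var)])
    mi = np.zeros((n_var, n_var))
    for i in range(n_var):
        for j in range(i, n_var):
            mi[i, j] = mi[j, i] = mutual_info_score(D[:, i], D[:, j])
    return mi

# Toy usage: two nonlinearly related variables plus an independent one.
rng = np.random.default_rng(5)
x = rng.standard_normal(2000)
X = np.column_stack([x, np.sin(3 * x) + 0.1 * rng.standard_normal(2000),
                     rng.standard_normal(2000)])
print(np.round(mutual_info_matrix(X), 2))   # first two columns share high mutual information
```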
Computations of Wall Distances Based on Differential Equations
NASA Technical Reports Server (NTRS)
Tucker, Paul G.; Rumsey, Chris L.; Spalart, Philippe R.; Bartels, Robert E.; Biedron, Robert T.
2004-01-01
The use of differential equations such as the Eikonal, Hamilton-Jacobi and Poisson equations for the economical calculation of the nearest wall distance d, which is needed by some turbulence models, is explored. Modifications that could palliate some turbulence-modeling anomalies are also discussed. Economy is of especial value for deforming/adaptive grid problems, for which, ideally, d is repeatedly computed. It is shown that the Eikonal and Hamilton-Jacobi equations can be easy to implement when written in implicit (or iterated) advection and advection-diffusion equation analogous forms, respectively. These, like the Poisson Laplacian term, commonly occur in CFD solvers, allowing the re-use of efficient algorithms and code components. The use of the NASA CFL3D CFD program to solve the implicit Eikonal and Hamilton-Jacobi equations is explored. The re-formulated d equations are easy to implement and are found to have robust convergence. For accurate Eikonal solutions, upwind metric differences are required. The Poisson approach is also found effective, and easiest to implement. Modified distances are not found to affect global outputs such as lift and drag significantly, at least in common situations such as airfoil flows.
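A hedged one-dimensional sketch of the Poisson approach mentioned above: solve the Poisson problem phi'' = -1 with phi = 0 at the walls, then recover the wall distance from d = -|grad phi| + sqrt(|grad phi|^2 + 2 phi). The finite-difference grid and dense solver are illustrative; this is not the CFL3D implementation.

```python
import numpy as np

# 1-D channel with walls at both ends: solve phi'' = -1 with phi(wall) = 0.
n, h = 101, 1.0 / 100
x = np.linspace(0.0, 1.0, n)
A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0                   # Dirichlet rows at the two walls
b = np.full(n, -h * h)
b[0] = b[-1] = 0.0
phi = np.linalg.solve(A, b)

grad = np.gradient(phi, h)
d_poisson = -np.abs(grad) + np.sqrt(grad**2 + 2.0 * phi)
d_exact = np.minimum(x, 1.0 - x)
print(np.max(np.abs(d_poisson - d_exact)))  # small discretization error only
```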
A simple algorithm to improve the performance of the WENO scheme on non-uniform grids
NASA Astrophysics Data System (ADS)
Huang, Wen-Feng; Ren, Yu-Xin; Jiang, Xiong
2018-02-01
This paper presents a simple approach for improving the performance of the weighted essentially non-oscillatory (WENO) finite volume scheme on non-uniform grids. This technique relies on the reformulation of the fifth-order WENO-JS scheme (the WENO scheme presented by Jiang and Shu in J. Comput. Phys. 126:202-228, 1996), designed on uniform grids, in terms of one cell-averaged value and its left and/or right interfacial values of the dependent variable. The effect of grid non-uniformity is taken into consideration by a proper interpolation of the interfacial values. On non-uniform grids, the proposed scheme is much more accurate than the original WENO-JS scheme, which was designed for uniform grids. When the grid is uniform, the resulting scheme reduces to the original WENO-JS scheme. At the same time, the proposed scheme is computationally much more efficient than the fifth-order WENO scheme designed specifically for non-uniform grids. A number of numerical test cases are simulated to verify the performance of the present scheme.
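For reference, a hedged sketch of the uniform-grid fifth-order WENO-JS reconstruction that the paper re-expresses is given below, using the standard Jiang-Shu candidate stencils, smoothness indicators and ideal weights. The epsilon value is the conventional choice and the sample data are illustrative; the non-uniform-grid interpolation step of the paper is not reproduced.

```python
import numpy as np

def weno5_left(v, eps=1e-6):
    """Fifth-order WENO-JS reconstruction of v at x_{i+1/2} (left-biased),
    given the cell averages v[i-2..i+2] as a length-5 array."""
    vm2, vm1, v0, vp1, vp2 = v
    # Candidate third-order reconstructions on the three sub-stencils
    q0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    q1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    q2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13 / 12 * (vm2 - 2 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4 * vm1 + 3 * v0) ** 2
    b1 = 13 / 12 * (vm1 - 2 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
    b2 = 13 / 12 * (v0 - 2 * vp1 + vp2) ** 2 + 0.25 * (3 * v0 - 4 * vp1 + vp2) ** 2
    # Nonlinear weights from the ideal weights d = (0.1, 0.6, 0.3)
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2])) ** 2
    w = a / a.sum()
    return w @ np.array([q0, q1, q2])

# Toy call on five smooth samples (in practice v would hold cell averages).
x = np.linspace(0, 1, 6)[:-1] + 0.1
print(weno5_left(np.sin(2 * np.pi * x)))
```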
Chen, Yun; Yang, Hui
2016-12-14
In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require assumptions on the data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.
Coupling fluid-structure interaction with phase-field fracture
NASA Astrophysics Data System (ADS)
Wick, Thomas
2016-12-01
In this work, a concept for coupling fluid-structure interaction with brittle fracture in elasticity is proposed. The fluid-structure interaction problem is modeled in terms of the arbitrary Lagrangian-Eulerian technique and couples the isothermal, incompressible Navier-Stokes equations with nonlinear elastodynamics using the Saint-Venant Kirchhoff solid model. The brittle fracture model is based on a phase-field approach for cracks in elasticity and pressurized elastic solids. In order to derive a common framework, the phase-field approach is re-formulated in Lagrangian coordinates to combine it with fluid-structure interaction. A crack irreversibility condition, that is mathematically characterized as an inequality constraint in time, is enforced with the help of an augmented Lagrangian iteration. The resulting problem is highly nonlinear and solved with a modified Newton method (e.g., error-oriented) that specifically allows for a temporary increase of the residuals. The proposed framework is substantiated with several numerical tests. In these examples, computational stability in space and time is shown for several goal functionals, which demonstrates reliability of numerical modeling and algorithmic techniques. But also current limitations such as the necessity of using solid damping are addressed.
Quasilinear models through the lens of resolvent analysis
NASA Astrophysics Data System (ADS)
McKeon, Beverley; Chini, Greg
2017-11-01
Quasilinear (QL) and generalized quasilinear (GQL) analyses, e.g. Marston et al., also variously described as statistical state dynamics models, e.g., Farrell et al., restricted nonlinear models, e.g. Thomas et al., or 2D/3C models, e.g. Gayme et al., have achieved considerable success in recovering the mean velocity profile for a range of turbulent flows. In QL approaches, the portion of the velocity field that can be represented as streamwise constant, i.e. with streamwise wavenumber kx = 0 , is fully resolved, while the streamwise-varying dynamics are linearized about the streamwise-constant field; that is, only those nonlinear interactions that drive the streamwise-constant field are retained, and the non-streamwise constant ``fluctuation-fluctuation'' interactions are ignored. Here, we show how these QL approaches can be reformulated in terms of the closed-loop resolvent analysis of McKeon & Sharma (2010), which enables us to identify reasons for their evident success as well as algorithms for their efficient computation. The support of ONR through Grant No. N00014-17-2307 is gratefully acknowledged.
NASA Astrophysics Data System (ADS)
Krishnan, M.; Bhowmik, B.; Tiwari, A. K.; Hazra, B.
2017-08-01
In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using recursive principal component analysis (RPCA) in conjunction with online damage indicators is proposed. In this method, acceleration data are used to obtain recursive proper orthogonal modes online using the rank-one perturbation method, which are subsequently utilized to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/nonlinear states that indicate damage. The RPCA algorithm iterates the eigenvector and eigenvalue estimates of the sample covariance matrix with each new data point at successive time instants, using the rank-one perturbation method. An online condition indicator (CI) based on the L2 norm of the error between the actual response and the response projected using the recursively updated eigenvector matrix over successive iterations is proposed. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data. The proposed CI, named the recursive residual error, is also adopted for simultaneous spatio-temporal damage detection. Numerical simulations performed on a five-degree-of-freedom nonlinear system under white noise and El Centro excitations, with different levels of nonlinearity simulating the damage scenarios, demonstrate the robustness of the proposed algorithm. Successful results obtained from practical case studies involving experiments performed on a cantilever beam subjected to earthquake excitation, for full-sensor and underdetermined cases, and from data from the recorded responses of the UCLA Factor building (the full data and a subset) demonstrate the efficacy of the proposed methodology as an ideal candidate for real-time, reference-free structural health monitoring.
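A hedged sketch of the recursive idea follows: the covariance estimate is updated with each new sample as a rank-one (exponentially forgetting) perturbation, the current leading eigenvectors are recomputed (here by a full eigendecomposition rather than the paper's rank-one eigen-update), and the recursive residual error is the L2 norm of the projection residual of the new sample. The forgetting factor, subspace dimension and toy mixing model are assumptions.

```python
import numpy as np

def recursive_residual_error(Y, n_modes=2, lam=0.98):
    """Y: (time, channels) acceleration array. Returns a residual-error time series."""
    n_ch = Y.shape[1]
    C = np.eye(n_ch) * 1e-6                          # initial covariance estimate
    errors = []
    for y in Y:
        C = lam * C + (1.0 - lam) * np.outer(y, y)   # rank-one covariance update
        _, V = np.linalg.eigh(C)                     # eigenvectors, ascending eigenvalues
        U = V[:, -n_modes:]                          # current leading proper orthogonal modes
        e = y - U @ (U.T @ y)                        # projection residual of the new sample
        errors.append(np.linalg.norm(e))
    return np.array(errors)

# Toy usage: the indicator spikes when the correlation structure changes mid-record.
rng = np.random.default_rng(6)
mix1 = rng.standard_normal((5, 2))
mix2 = mix1 + 0.8 * rng.standard_normal((5, 2))      # altered ("damaged") mixing
S = rng.standard_normal((2000, 2))
Y = np.vstack([S[:1000] @ mix1.T, S[1000:] @ mix2.T])
err = recursive_residual_error(Y)
print(err[900:1000].mean(), err[1000:1100].mean())   # error rises right after the change
```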
Android Malware Classification Using K-Means Clustering Algorithm
NASA Astrophysics Data System (ADS)
Hamid, Isredza Rahmi A.; Syafiqah Khalid, Nur; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Chai Wen, Chuah
2017-08-01
Malware is designed to gain access to or damage a computer system without the user's knowledge, and attackers exploit malware to commit crime or fraud. This paper proposes an Android malware classification approach based on the K-Means clustering algorithm. We evaluate the proposed model in terms of accuracy using machine learning algorithms. Two datasets, VirusTotal and Malgenome, were selected to demonstrate the application of the K-Means clustering algorithm. We classify the Android malware into three clusters: ransomware, scareware and goodware. Nine features were considered for each type of dataset: Lock Detected, Text Detected, Text Score, Encryption Detected, Threat, Porn, Law, Copyright and Moneypak. We used IBM SPSS Statistics software for data classification and WEKA tools to evaluate the resulting clusters. The proposed K-Means clustering algorithm shows promising results with high accuracy when tested using the Random Forest algorithm.
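A hedged sketch of the clustering step is shown below: K-Means with k = 3 applied to a standardized feature matrix whose nine columns correspond to the features listed in the abstract. The feature values (and the downstream evaluation against a labelled classifier) are synthetic placeholders, not the VirusTotal or Malgenome data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

features = ["lock_detected", "text_detected", "text_score", "encryption_detected",
            "threat", "porn", "law", "copyright", "moneypak"]

rng = np.random.default_rng(7)
X = rng.random((500, len(features)))        # placeholder feature matrix (500 samples)

km = KMeans(n_clusters=3, n_init=10, random_state=0)   # ransomware / scareware / goodware
labels = km.fit_predict(StandardScaler().fit_transform(X))
print(np.bincount(labels))                  # number of samples assigned to each cluster
```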
Dobrovolskaia, Marina A; McNeil, Scott E
2015-07-01
Clinical translation of nucleic acid-based therapeutics (NATs) is hampered by assorted challenges in immunotoxicity, hematotoxicity, pharmacokinetics, toxicology and formulation. Nanotechnology-based platforms are being considered to help address some of these challenges due to the nanoparticles' ability to change drug biodistribution, stability, circulation half-life, route of administration and dosage. Addressing toxicology and pharmacology concerns by various means including NATs reformulation using nanotechnology-based carriers has been reviewed before. However, little attention was given to the immunological and hematological issues associated with nanotechnology reformulation. This review focuses on application of nanotechnology carriers for delivery of various types of NATs, and how reformulation using nanoparticles affects immunological and hematological toxicities of this promising class of therapeutic agents. NATs share several immunological and hematological toxicities with common nanotechnology carriers. In order to avoid synergy or exaggeration of undesirable immunological and hematological effects of NATs by a nanocarrier, it is critical to consider the immunological compatibility of the nanotechnology platform and its components. Since receptors sensing nucleic acids are located essentially in all cellular compartments, a strategy for developing a nanoformulation with reduced immunotoxicity should first focus on precise delivery to the target site/cells and then on optimizing intracellular distribution.
A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.
Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan
2015-06-01
Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion, which can be used as the objective function in optimal experiment design, requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal eigenvalue function is that it can be nondifferentiable. In addition, no closed-form expression exists for the computation of the eigenvalues of a matrix larger than 4 by 4. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result the maximization of the minimum eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology. Copyright © 2015. Published by Elsevier Inc.
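A hedged numerical sketch of the reformulation idea: introduce a scalar t and require M - tI to be positive definite, so that maximizing t maximizes a lower bound on the smallest eigenvalue of the Fisher information matrix M; Sylvester's criterion converts the definiteness requirement into sign constraints on the leading principal minors, which are smooth functions of t and of the design variables. Only the constraint evaluation is sketched; embedding it in an optimal-control solver is not shown, and the toy matrix is illustrative.

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the k x k upper-left blocks of A, for k = 1..n."""
    return np.array([np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)])

def sylvester_constraints(M, t):
    """Constraint values g_k(t) = det((M - t I)[:k, :k]); requiring g_k(t) > 0 for all k
    enforces M - t I positive definite, i.e. t below the smallest eigenvalue of M."""
    return leading_principal_minors(M - t * np.eye(M.shape[0]))

# Toy Fisher information matrix: all minors positive only for t below lambda_min.
M = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
print(np.linalg.eigvalsh(M).min())
print(sylvester_constraints(M, 1.5))   # all positive -> t = 1.5 is feasible
print(sylvester_constraints(M, 2.5))   # some non-positive -> t = 2.5 is infeasible
```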
Kinetics of binary nucleation of vapors in size and composition space.
Fisenko, Sergey P; Wilemski, Gerald
2004-11-01
We reformulate the kinetic description of binary nucleation in the gas phase using two natural independent variables: the total number of molecules g and the molar composition x of the cluster. The resulting kinetic equation can be viewed as a two-dimensional Fokker-Planck equation describing the simultaneous Brownian motion of the clusters in size and composition space. Explicit expressions for the Brownian diffusion coefficients in cluster size and composition space are obtained. For characterization of binary nucleation in gases three criteria are established. These criteria establish the relative importance of the rate processes in cluster size and composition space for different gas phase conditions and types of liquid mixtures. The equilibrium distribution function of the clusters is determined in terms of the variables g and x. We obtain an approximate analytical solution for the steady-state binary nucleation rate that has the correct limit in the transition to unary nucleation. To further illustrate our description, the nonequilibrium steady-state cluster concentrations are found by numerically solving the reformulated kinetic equation. For the reformulated transient problem, the relaxation or induction time for binary nucleation was calculated using Galerkin's method. This relaxation time is affected by processes in both size and composition space, but the contributions from each process can be separated only approximately.
Davies, Patrick T; Martin, Meredith J
2013-11-01
Although children's security in the context of the interparental relationship has been identified as a key explanatory mechanism in pathways between family discord and child psychopathology, little is known about the inner workings of emotional security as a goal system. Thus, the objective of this paper is to describe how our reformulation of emotional security theory within an ethological and evolutionary framework may advance the characterization of the architecture and operation of emotional security and, in the process, cultivate sustainable growing points in developmental psychopathology. The first section of the paper describes how children's security in the interparental relationship is organized around a distinctive behavioral system designed to defend against interpersonal threat. Building on this evolutionary foundation for emotional security, the paper offers an innovative taxonomy for identifying qualitatively different ways children try to preserve their security and its innovative implications for more precisely informing understanding of the mechanisms in pathways between family and developmental precursors and children's trajectories of mental health. In the final section, the paper highlights the potential of the reformulation of emotional security theory to stimulate new generations of research on understanding how children defend against social threats in ecologies beyond the interparental dyad, including both familial and extrafamilial settings.
Lou, Wendy; L’Abbe, Mary R.
2017-01-01
To align with broader public health initiatives, reformulation of products to be lower in sugars requires interventions that also aim to reduce calorie contents. Currently available foods and beverages with a range of nutrient levels can be used to project successful reformulation opportunities. The objective of this study was to examine the relationship between free sugars and calorie levels in Canadian prepackaged foods and beverages. This study was a cross-sectional analysis of the University of Toronto’s 2013 Food Label Database, limited to major sources of total sugar intake in Canada (n = 6755). Penalized B-spline regression modelling was used to examine the relationship between free sugar levels (g/100 g or 100 mL) and caloric density (kcal/100 g or 100 mL), by subcategory. Significant relationships were observed for only 3 of 5 beverage subcategories and for 14 of 32 food subcategories. Most subcategories demonstrated a positive trend of varying magnitude; however, results were not consistent across related subcategories (e.g., dairy-based products). Findings highlight potential areas of concern for reformulation, and the need for innovative solutions to ensure free sugars are reduced in products within the context of improving the overall nutritional quality of the diet. PMID:28872586
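A hedged sketch of a penalized B-spline (P-spline) regression of caloric density on free sugar content is given below: a B-spline basis with a second-difference penalty, solved as a ridge-type linear system. The knot count, penalty strength and synthetic data are illustrative assumptions; this is not the Food Label Database analysis.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_basis=15, degree=3, lam=1.0):
    """Penalized B-spline regression: B-spline basis plus second-difference penalty."""
    inner = np.linspace(x.min(), x.max(), n_basis - degree + 1)
    t = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]   # open uniform knot vector
    B = np.column_stack([BSpline(t, np.eye(n_basis)[j], degree)(x) for j in range(n_basis)])
    D = np.diff(np.eye(n_basis), n=2, axis=0)                     # second-difference operator
    coef = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return lambda xnew: np.column_stack(
        [BSpline(t, np.eye(n_basis)[j], degree)(xnew) for j in range(n_basis)]) @ coef

# Synthetic example: calories rising nonlinearly with free sugar content.
rng = np.random.default_rng(8)
sugar = rng.uniform(0, 60, 400)                       # g free sugars / 100 g
kcal = 150 + 3.2 * sugar - 0.02 * sugar**2 + rng.normal(0, 20, 400)
fit = pspline_fit(sugar, kcal)
print(fit(np.array([5.0, 25.0, 50.0])))               # fitted caloric density at three levels
```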
Bruins, Maaike J.; Dötsch-Klerk, Mariska; Matthee, Joep; Kearney, Mary; van Elk, Kathelijn; Weber, Peter; Eggersdorfer, Manfred
2015-01-01
Hypertension is a major modifiable risk factor for cardiovascular disease and mortality, which could be lowered by reducing dietary sodium. The potential health impact of a product reformulation in the Netherlands was modelled, selecting packaged soups containing on average 25% less sodium as an example of an achievable product reformulation when implemented gradually. First, the blood pressure lowering resulting from the sodium intake reduction was modelled. Second, the predicted blood pressure lowering was translated into the potentially preventable incidence and mortality cases from stroke, acute myocardial infarction (AMI), angina pectoris, and heart failure (HF) after one year of salt reduction. Finally, the potentially preventable subsequent lifetime Disability-Adjusted Life Years (DALYs) were calculated. The sodium reduction in soups might potentially reduce the incidence and mortality of stroke by approximately 0.5%, AMI and angina by 0.3%, and HF by 0.2%. The related burden of disease could be reduced by approximately 800 lifetime DALYs. This modelling approach can be used to provide insight into the potential public health impact of sodium reduction in specific food products. The data demonstrate that an achievable food product reformulation to reduce sodium can potentially benefit public health, albeit modestly. When implemented across multiple product categories and countries, a significant health impact could be achieved. PMID:26393647
Collins, Marissa; Mason, Helen; O'Flaherty, Martin; Guzman-Castillo, Maria; Critchley, Julia; Capewell, Simon
2014-07-01
Dietary salt intake has been causally linked to high blood pressure and increased risk of cardiovascular events. Cardiovascular disease causes approximately 35% of total UK deaths, at an estimated annual cost of £30 billion. The World Health Organization and the National Institute for Health and Care Excellence have recommended a reduction in the intake of salt in people's diets. This study evaluated the cost-effectiveness of four population health policies to reduce dietary salt intake on an English population to prevent coronary heart disease (CHD). The validated IMPACT CHD model was used to quantify and compare four policies: 1) Change4Life health promotion campaign, 2) front-of-pack traffic light labeling to display salt content, 3) Food Standards Agency working with the food industry to reduce salt (voluntary), and 4) mandatory reformulation to reduce salt in processed foods. The effectiveness of these policies in reducing salt intake, and hence blood pressure, was determined by systematic literature review. The model calculated the reduction in mortality associated with each policy, quantified as life-years gained over 10 years. Policy costs were calculated using evidence from published sources. Health care costs for specific CHD patient groups were estimated. Costs were compared against a "do nothing" baseline. All policies resulted in a life-year gain over the baseline. Change4life and labeling each gained approximately 1960 life-years, voluntary reformulation 14,560 life-years, and mandatory reformulation 19,320 life-years. Each policy appeared cost saving, with mandatory reformulation offering the largest cost saving, more than £660 million. All policies to reduce dietary salt intake could gain life-years and reduce health care expenditure on coronary heart disease. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
What can the food and drink industry do to help achieve the 5% free sugars goal?
Gibson, Sigrid; Ashwell, Margaret; Arthur, Jenny; Bagley, Lindsey; Lennox, Alison; Rogers, Peter J; Stanner, Sara
2017-07-01
To contribute evidence and make recommendations to assist in achieving free sugars reduction, with due consideration to the broader picture of weight management and dietary quality. An expert workshop in July 2016 addressed options outlined in the Public Health England report 'Sugar reduction: The evidence for action' that related directly to the food industry. Panel members contributed expertise in food technology, public health nutrition, marketing, communications, psychology and behaviour. Recommendations were directed towards reformulation, reduced portion sizes, labelling and consumer education. These were evaluated based on their feasibility, likely consumer acceptability, efficacy and cost. The panel agreed that the 5% target for energy from free sugars is unlikely to be achievable by the UK population in the near future, but a gradual reduction from the average current level of intake is feasible. Progress requires collaborations between government, food industry, non-government organisations, health professionals, educators and consumers. Reformulation should start with the main contributors of free sugars in the diet, prioritising those products high in free sugars and relatively low in micronutrients. There is most potential for replacing free sugars in beverages using high-potency sweeteners and possibly via gradual reduction in sweetness levels. However, reformulation alone, with its inherent practical difficulties, will not achieve the desired reduction in free sugars. Food manufacturers and the out-of-home sector can help consumers by providing smaller portions. Labelling of free sugars would extend choice and encourage reformulation; however, government needs to assist industry by addressing current analytical and regulatory problems. There are also opportunities for multi-agency collaboration to develop tools/communications based on the Eatwell Guide, to help consumers understand the principles of a varied, healthy, balanced diet. Multiple strategies will be required to achieve a reduction in free sugars intake to attain the 5% energy target. The panel produced consensus statements with recommendations as to how this might be achieved.
NASA Technical Reports Server (NTRS)
Zalameda, Joseph N.; Burke, Eric R.; Horne, Michael R.; Bly, James B.
2015-01-01
Fatigue testing of advanced composite structures is critical to validate both structural designs and damage prediction models. In-situ inspection methods are necessary to track damage onset and growth as a function of load cycles. Passive thermography is a large area, noncontact inspection technique that is used to detect composite damage onset and growth in real time as a function of fatigue cycles. The thermal images are acquired in synchronicity to the applied compressive load using a dual infrared camera acquisition system for full (front and back) coverage. Image processing algorithms are investigated to increase defect contrast areas. The thermal results are compared to non-immersion ultrasound inspections and acoustic emission data.
Systematic review of dietary salt reduction policies: Evidence for an effectiveness hierarchy?
Hyseni, Lirije; Elliot-Green, Alex; Lloyd-Williams, Ffion; Kypridemos, Chris; O'Flaherty, Martin; McGill, Rory; Orton, Lois; Bromley, Helen; Cappuccio, Francesco P; Capewell, Simon
2017-01-01
Non-communicable disease (NCD) prevention strategies now prioritise four major risk factors: food, tobacco, alcohol and physical activity. Dietary salt intake remains much higher than recommended, increasing blood pressure, cardiovascular disease and stomach cancer. Substantial reductions in salt intake are therefore urgently needed. However, the debate continues about the most effective approaches. To inform future prevention programmes, we systematically reviewed the evidence on the effectiveness of possible salt reduction interventions. We further compared "downstream, agentic" approaches targeting individuals with "upstream, structural" policy-based population strategies. We searched six electronic databases (CDSR, CRD, MEDLINE, SCI, SCOPUS and the Campbell Library) using a pre-piloted search strategy focussing on the effectiveness of population interventions to reduce salt intake. Retrieved papers were independently screened, appraised and graded for quality by two researchers. To facilitate comparisons between the interventions, the extracted data were categorised using nine stages along the agentic/structural continuum, from "downstream": dietary counselling (for individuals, worksites or communities), through media campaigns, nutrition labelling, voluntary and mandatory reformulation, to the most "upstream" regulatory and fiscal interventions, and comprehensive strategies involving multiple components. After screening 2,526 candidate papers, 70 were included in this systematic review (49 empirical studies and 21 modelling studies). Some papers described several interventions. Quality was variable. Multi-component strategies involving both upstream and downstream interventions generally achieved the biggest reductions in salt consumption across an entire population, most notably 4g/day in Finland and Japan, 3g/day in Turkey and 1.3g/day recently in the UK. Mandatory reformulation alone could achieve a reduction of approximately 1.45g/day (three separate studies), followed by voluntary reformulation (-0.8g/day), school interventions (-0.7g/day), short-term dietary advice (-0.6g/day) and nutrition labelling (-0.4g/day), but each with a wide range. Tax and community-based counselling could each typically reduce salt intake by 0.3g/day, whilst even smaller population benefits were derived from health education media campaigns (-0.1g/day). Worksite interventions achieved an increase in intake (+0.5g/day), albeit with a very wide range. Long-term dietary advice could achieve a -2g/day reduction under optimal research trial conditions; however, smaller reductions might be anticipated in unselected individuals. Comprehensive strategies involving multiple components (reformulation, food labelling and media campaigns) and "upstream" population-wide policies such as mandatory reformulation generally appear to achieve larger reductions in population-wide salt consumption than "downstream", individually focussed interventions. This 'effectiveness hierarchy' might deserve greater emphasis in future NCD prevention strategies.
Building damage assessment using airborne lidar
NASA Astrophysics Data System (ADS)
Axel, Colin; van Aardt, Jan
2017-10-01
The assessment of building damage following a natural disaster is a crucial step in determining the impact of the event itself and gauging reconstruction needs. Automatic methods for deriving damage maps from remotely sensed data are preferred, since they are regarded as being rapid and objective. We propose an algorithm for performing unsupervised building segmentation and damage assessment using airborne light detection and ranging (lidar) data. Local surface properties, including normal vectors and curvature, were used along with region growing to segment individual buildings in lidar point clouds. Damaged building candidates were identified based on rooftop inclination angle, and then damage was assessed using planarity and point height metrics. Validation of the building segmentation and damage assessment techniques was performed using airborne lidar data collected after the Haiti earthquake of 2010. Building segmentation and damage assessment accuracies of 93.8% and 78.9%, respectively, were obtained using lidar point clouds and expert damage assessments of 1953 buildings in heavily damaged regions. We believe this research presents an indication of the utility of airborne lidar remote sensing for increasing the efficiency and speed at which emergency response operations are performed.
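As a simple illustration of the planarity idea, the sketch below fits a plane to rooftop lidar points and uses the out-of-plane residual spread as a roughness indicator; the thresholds, data and exact metrics of the study are not reproduced.

```python
import numpy as np

def planarity_metric(points):
    """Fit a plane to rooftop lidar points (N x 3 array) by SVD and return
    the RMS of the out-of-plane residuals. A large residual suggests a
    collapsed or rubble-covered roof; thresholds would have to be calibrated
    against reference (expert) assessments."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal            # signed point-to-plane distances
    return float(np.sqrt(np.mean(residuals ** 2)))

# Example with synthetic data: a flat roof vs. a "damaged" rubble-like roof.
rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(0, 10, 500), rng.uniform(0, 10, 500), 0.02 * rng.standard_normal(500)]
rubble = np.c_[rng.uniform(0, 10, 500), rng.uniform(0, 10, 500), 0.8 * rng.standard_normal(500)]
print(planarity_metric(flat), planarity_metric(rubble))
```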
NASA Astrophysics Data System (ADS)
Zhang, Jiu-Chang
2018-02-01
Triaxial compression tests are conducted on a quasi-brittle rock, limestone. The analyses show that elastoplastic deformation is coupled with damage. Based on the experimental investigation, a coupled elastoplastic damage model is developed within the framework of irreversible thermodynamics. The coupling effects between the plastic and damage dissipations are described by introducing an isotropic damage variable into the elastic stiffness and yield criterion. The novelty of the model is in the description of the thermodynamic force associated with damage, which is formulated as a state function of both elastic and plastic strain energies. This formulation fully accounts for the combined effects of plastic strain and the stress-change history of the rock material on the development of damage. The damage criterion and potential are constructed to determine the onset and evolution of the damage variable. The return mapping algorithms of the coupled model are deduced for three different inelastic corrections. Comparisons between test data and numerical simulations show that the coupled elastoplastic damage model is capable of describing the main mechanical behaviours of the quasi-brittle rock.
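As a rough illustration of the kind of coupling involved, a generic isotropic damage-elastoplasticity sketch is given below; the paper's specific free energy, damage force and return-mapping corrections differ in detail, and the weighting coefficient beta is an illustrative assumption.

```latex
% Generic isotropic damage-elastoplasticity coupling (illustrative only).
\sigma = (1-d)\,\mathbb{C} : (\varepsilon - \varepsilon^{p}), \qquad 0 \le d \le 1 ,
% Thermodynamic force conjugate to the damage variable d, written here as a
% function of both the elastic strain energy and a stored plastic energy term:
Y = -\frac{\partial \psi}{\partial d}
  = \tfrac{1}{2}\,(\varepsilon - \varepsilon^{p}) : \mathbb{C} : (\varepsilon - \varepsilon^{p})
    \;+\; \beta\, w^{p}, \qquad
w^{p} = \int \sigma : \dot{\varepsilon}^{p}\, \mathrm{d}t .
% Damage initiates and evolves when a criterion g(Y, d) = 0 is met, so that d
% grows with the combined elastic and plastic energy input, as described above.
```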
Detection of Non-Symmetrical Damage in Smart Plate-Like Structures
NASA Technical Reports Server (NTRS)
Blanks, H. T.; Emeric, P. R.
1998-01-01
A two-dimensional model for in-plane vibrations of a cantilever plate with a non-symmetrical damage is used in the context of defect identification in materials with piezoelectric ceramic patches bonded to their surface. These patches can act both as actuators and sensors in a self-analyzing fashion, which is a characteristic of smart materials. A Galerkin method is used to approximate the dynamic response of these structures. The natural frequency shifts due to the damage are estimated numerically and compared to experimental data obtained from tests on cantilever aluminum plate-like structures damaged at different locations with defects of different depths. The damage location and extent are determined by an enhanced least square identification method. Efficacy of the frequency shift based algorithms is demonstrated using experimental data.
Damage Evaluation Based on a Wave Energy Flow Map Using Multiple PZT Sensors
Liu, Yaolu; Hu, Ning; Xu, Hong; Yuan, Weifeng; Yan, Cheng; Li, Yuan; Goda, Riu; Alamusi; Qiu, Jinhao; Ning, Huiming; Wu, Liangke
2014-01-01
A new wave energy flow (WEF) map concept was proposed in this work. Based on it, an improved technique incorporating the laser scanning method and Betti's reciprocal theorem was developed to evaluate the shape and size of damage as well as to realize visualization of wave propagation. In this technique, a simple signal processing algorithm was proposed to construct the WEF map when waves propagate through an inspection region, and multiple lead zirconate titanate (PZT) sensors were employed to improve inspection reliability. Various damages in aluminum and carbon fiber reinforced plastic laminated plates were experimentally and numerically evaluated to validate this technique. The results show that it can effectively evaluate the shape and size of damage from wave field variations around the damage in the WEF map. PMID:24463430
Simulating Progressive Damage of Notched Composite Laminates with Various Lamination Schemes
NASA Astrophysics Data System (ADS)
Mandal, B.; Chakrabarti, A.
2017-05-01
A three dimensional finite element based progressive damage model has been developed for the failure analysis of notched composite laminates. The material constitutive relations and the progressive damage algorithms are implemented into finite element code ABAQUS using user-defined subroutine UMAT. The existing failure criteria for the composite laminates are modified by including the failure criteria for fiber/matrix shear damage and delamination effects. The proposed numerical model is quite efficient and simple compared to other progressive damage models available in the literature. The efficiency of the present constitutive model and the computational scheme is verified by comparing the simulated results with the results available in the literature. A parametric study has been carried out to investigate the effect of change in lamination scheme on the failure behaviour of notched composite laminates.
NASA Astrophysics Data System (ADS)
Kondo, Kei-Ichi; Kato, Seikou; Shibata, Akihiro; Shinohara, Toru
2015-05-01
The purpose of this paper is to review the recent progress in understanding quark confinement. The emphasis of this review is placed on how to obtain a manifestly gauge-independent picture for quark confinement supporting the dual superconductivity in the Yang-Mills theory, which should be compared with the Abelian projection proposed by 't Hooft. The basic tools are novel reformulations of the Yang-Mills theory based on change of variables extending the decomposition of the SU(N) Yang-Mills field due to Cho, Duan-Ge and Faddeev-Niemi, together with the combined use of extended versions of the Diakonov-Petrov version of the non-Abelian Stokes theorem for the SU(N) Wilson loop operator. Moreover, we give the lattice gauge theoretical versions of the reformulation of the Yang-Mills theory which enable us to perform the numerical simulations on the lattice. In fact, we present some numerical evidence supporting the dual superconductivity for quark confinement. The numerical simulations include the derivation of the linear potential for the static interquark potential, i.e., non-vanishing string tension, in which the "Abelian" dominance and magnetic monopole dominance are established, confirmation of the dual Meissner effect by measuring the chromoelectric flux tube between a quark-antiquark pair, the induced magnetic-monopole current, and the type of dual superconductivity, etc. In addition, we give a direct connection between the topological configuration of the Yang-Mills field such as instantons/merons and the magnetic monopole. We show especially that magnetic monopoles in the Yang-Mills theory can be constructed in a manifestly gauge-invariant way starting from the gauge-invariant Wilson loop operator and thereby the contribution from the magnetic monopoles can be extracted from the Wilson loop in a gauge-invariant way through the non-Abelian Stokes theorem for the Wilson loop operator, which is a prerequisite for exhibiting magnetic monopole dominance for quark confinement. The Wilson loop average is calculated according to the new reformulation written in terms of new field variables obtained from the original Yang-Mills field based on change of variables. The Maximally Abelian gauge in the original Yang-Mills theory is also reproduced by taking a specific gauge fixing in the reformulated Yang-Mills theory. This observation justifies the preceding results obtained in the maximal Abelian gauge at least for gauge-invariant quantities for the SU(2) gauge group, which eliminates the criticism of gauge artifact raised for the Abelian projection. The claim has been confirmed based on the numerical simulations. However, for SU(N) (N ≥ 3), such a gauge-invariant reformulation is not unique, although the extension along the line proposed by Cho, Faddeev and Niemi is possible. In fact, we have found that there are a number of possible options for the reformulation, which are discriminated by the maximal stability group H̃ of G, while there is a unique option, H̃ = U(1), for G = SU(2). The maximal stability group depends on the representation of the gauge group to which the quark source belongs. For the fundamental quark of SU(3), the maximal stability group is U(2), which is different from the maximal torus group U(1) × U(1) suggested by the Abelian projection.
Therefore, the chromomagnetic monopole inherent in the Wilson loop operator responsible for confinement of quarks in the fundamental representation for SU(3) is the non-Abelian magnetic monopole, which is distinct from the Abelian magnetic monopole for the SU(2) case. Therefore, we claim that the mechanism for quark confinement for SU(N) (N ≥ 3) is the non-Abelian dual superconductivity caused by condensation of non-Abelian magnetic monopoles. We give some theoretical considerations and numerical results supporting this picture. Finally, we discuss some issues to be investigated in future studies.
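For concreteness, the SU(2) form of the change of variables referred to above (the Cho-Duan-Ge / Faddeev-Niemi decomposition) can be sketched as follows; conventions and normalizations vary between papers, so this is indicative only.

```latex
% SU(2) case of the field decomposition (sketch; conventions vary).
\mathbf{A}_\mu(x) =
  \underbrace{c_\mu(x)\,\mathbf{n}(x) + g^{-1}\,\partial_\mu \mathbf{n}(x) \times \mathbf{n}(x)}_{\mathbf{V}_\mu(x)\ \text{(restricted field)}}
  \;+\; \mathbf{X}_\mu(x),
\qquad \mathbf{n}\cdot\mathbf{n} = 1, \quad \mathbf{n}\cdot\mathbf{X}_\mu = 0 .
% The restricted field V carries the "Abelian" and magnetic-monopole degrees of
% freedom that enter the non-Abelian Stokes theorem for the Wilson loop, while X
% describes the remaining (off-diagonal) gluonic fluctuations.
```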
Using Impact Modulation to Identify Loose Bolts on a Satellite
2011-10-21
for public release; distribution is unlimited the literature to be an effective damage detection method for cracks, delamination, and fatigue in...to identify loose bolts and fatigue damage using optimized sensor locations using a Support Vector Machines algorithm to classify the damage. Finally...[48] did preliminary work which showed that VM is effective in detecting fatigue cracks in engineering components despite changes in actuator location
Damage detection in rotating machinery by means of entropy-based parameters
NASA Astrophysics Data System (ADS)
Tocarciuc, Alexandru; Bereteu, Liviu; Drăgănescu, Gheorghe Eugen
2014-11-01
The paper proposes two new entropy-based parameters, namely the Renyi Entropy Index (REI) and the Sharma-Mittal Entropy Index (SMEI), for detecting the presence of failures (or damage) in rotating machinery, namely belt structural damage, belt wheel misalignment, failure of the bolt fixing the machine to its baseplate, and eccentricities (e.g., due to a detached piece of material or poor mounting of the rotating components of the machine). The algorithms used to obtain the proposed entropy-based parameters are described, and test data are used to assess their sensitivity. A vibration test bench is used to measure vibration levels while damage is induced artificially. The deviation of the two entropy-based parameters is compared in two states of the vibration test bench: undamaged and damaged. At the end of the study, their sensitivity is compared with that of the Shannon Entropic Index.
Classification Model for Damage Localization in a Plate Structure
NASA Astrophysics Data System (ADS)
Janeliukstis, R.; Ruchevskis, S.; Chate, A.
2018-01-01
The present study is devoted to the problem of damage localization by means of data classification. The commercial ANSYS finite-element program was used to make a model of a cantilevered composite plate equipped with numerous strain sensors. The plate was divided into zones, and, for data classification purposes, each of them housed several points to which a point mass of 5 and 10% of the plate mass was applied. At each of these points, a numerical modal analysis was performed, from which the first few natural frequencies and strain readings were extracted. The strain data for every point were the input for a classification procedure involving k-nearest neighbors and decision trees. The classification model was trained and optimized by fine-tuning the key parameters of both algorithms. Finally, two new query points were simulated and subjected to classification, i.e., assigned a label corresponding to one of the zones of the plate, thus localizing these points. Damage localization results were compared for both algorithms and were found to be in good agreement with the actual application positions of the point load.
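A minimal sketch of the classification step is given below, assuming the strain/frequency features and zone labels have already been extracted from the finite-element modal analyses; the array shapes, parameter grids and scikit-learn estimators are illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))   # placeholder features: strain readings + natural frequencies
y = rng.integers(0, 8, 200)          # zone label of each mass-placement point

# Fine-tune the key parameters of both algorithms by cross-validated grid search.
knn = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 7]}, cv=5).fit(X, y)
tree = GridSearchCV(DecisionTreeClassifier(random_state=0), {"max_depth": [3, 5, 10, None]}, cv=5).fit(X, y)

# A new query point (simulated damage location) is assigned to one of the zones.
query = rng.standard_normal((1, 12))
print("kNN zone:", knn.predict(query)[0], " tree zone:", tree.predict(query)[0])
```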
Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms
Kwak, Dae-Ho; Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2014-01-01
This study presents fault detection of roller bearings through signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of bearings, variations of kurtosis values are investigated using two different data processing techniques: minimum entropy deconvolution (MED) and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks on bearing vibration data with measurement noise. By considering the execution sequence of MED and the TKEO, the study found that the kurtosis sensitivity towards a defect on bearings could be greatly improved. Also, the vibration signal from both healthy and damaged bearings is decomposed into multiple intrinsic mode functions (IMFs) through empirical mode decomposition (EMD). The weight vectors of IMFs become design variables for a genetic algorithm (GA). The weights of each IMF can be optimized through the genetic algorithm to enhance the sensitivity of kurtosis on damaged bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with a defect and an intact system. PMID:24368701
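A minimal sketch of the TKEO and kurtosis tracking on a simulated bearing signal follows; MED and the EMD-GA weighting described above are not reproduced, and the signal model is an illustrative assumption.

```python
import numpy as np
from scipy.stats import kurtosis

def tkeo(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]        # simple edge handling
    return psi

fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
healthy = 0.2 * np.random.randn(t.size)
faulty = healthy.copy()
faulty[::400] += 1.5                          # simulated defect: repeating impulses

for name, sig in [("healthy", healthy), ("faulty", faulty)]:
    print(name, "kurtosis raw:", round(kurtosis(sig), 2),
          "kurtosis TKEO:", round(kurtosis(tkeo(sig)), 2))
```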
MIMO signal processing with RLSCMA algorithm for multi-mode multi-core optical transmission system
NASA Astrophysics Data System (ADS)
Bi, Yuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya
2018-01-01
When signals are transmitted over multi-mode multi-core fiber, mode coupling occurs between modes. Mode dispersion also occurs because each mode has a different propagation speed in the link. Mode coupling and mode dispersion degrade the useful signal in the transmission link, so the receiver needs to process the received signal with digital signal processing and compensate for the impairment introduced in the link. We first analyze the influence of mode coupling and mode dispersion in the transmission of signals over multi-mode multi-core fiber, and then present the relationship between the coupling coefficient and the dispersion coefficient. We then carry out adaptive signal processing with MIMO equalizers based on the recursive least squares constant modulus algorithm (RLSCMA). The MIMO equalization algorithm adapts the equalization taps according to the degree of crosstalk in cores or modes, which eliminates the interference among different modes and cores in a space division multiplexing (SDM) transmission system. The simulation results show that the distorted signals are restored efficiently with fast convergence speed.
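For orientation, the sketch below implements the basic stochastic-gradient constant modulus algorithm for a single equalizer output; the paper's RLSCMA variant and the full MIMO tap matrix across modes and cores are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, r2=1.0):
    """x: received complex baseband samples. Returns equalized symbols."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    y_out = []
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps:n][::-1]            # regression vector (most recent first)
        y = np.vdot(w, xn)                    # y = w^H x
        e = y * (np.abs(y) ** 2 - r2)         # constant-modulus error
        w -= mu * np.conj(e) * xn             # gradient step driving |y|^2 toward r2
        y_out.append(y)
    return np.array(y_out)

# Example: QPSK symbols through a mild 2-tap channel.
rng = np.random.default_rng(2)
sym = (rng.integers(0, 2, 5000) * 2 - 1 + 1j * (rng.integers(0, 2, 5000) * 2 - 1)) / np.sqrt(2)
rx = np.convolve(sym, [1.0, 0.3 + 0.2j])[: sym.size]
eq = cma_equalize(rx)
print("modulus spread before/after:", np.std(np.abs(rx)), np.std(np.abs(eq)))
```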
Improved configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Colbaugh, R.
1990-01-01
This article presents a singularity-robust task-prioritized reformulation of the configuration control scheme for redundant robot manipulators. This reformulation suppresses large joint velocities near singularities, at the expense of small task trajectory errors. This is achieved by optimally reducing the joint velocities to induce minimal errors in the task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion, when both cannot be achieved exactly. The improved configuration control scheme is illustrated for a variety of additional tasks, and extensive simulation results are presented.
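As an illustration of the kind of trade-off described above, the sketch below applies a generic damped least-squares (singularity-robust) velocity solution to a near-singular Jacobian; it is not the article's task-prioritized configuration control law, and the damping value is an illustrative assumption.

```python
import numpy as np

def damped_joint_velocities(J, xdot_des, lam=0.05):
    """J: m x n task Jacobian; xdot_des: desired task velocity (length m).
    Returns joint velocities minimizing ||J qdot - xdot||^2 + lam^2 ||qdot||^2,
    i.e. bounded joint velocities at the expense of a small task error."""
    m = J.shape[0]
    JJt = J @ J.T + (lam ** 2) * np.eye(m)
    return J.T @ np.linalg.solve(JJt, xdot_des)

# Near-singular Jacobian of a 2-link arm (elbow almost fully extended).
J = np.array([[-1.99, -0.99],
              [ 0.02,  0.01]])
xdot = np.array([0.0, 0.1])
print("pseudo-inverse:", np.linalg.pinv(J) @ xdot)            # large joint velocities
print("damped:        ", damped_joint_velocities(J, xdot))    # bounded, small task error
```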
2PI effective action for the SYK model and tensor field theories
NASA Astrophysics Data System (ADS)
Benedetti, Dario; Gurau, Razvan
2018-05-01
We discuss the two-particle irreducible (2PI) effective action for the SYK model and for tensor field theories. For the SYK model the 2PI effective action reproduces the bilocal reformulation of the model without using replicas. In general tensor field theories the 2PI formalism is the only way to obtain a bilocal reformulation of the theory, and as such is a precious instrument for the identification of soft modes and for possible holographic interpretations. We compute the 2PI action for several models, and push it up to fourth order in the 1/N expansion for the model proposed by Witten in [1], uncovering a one-loop structure in terms of an auxiliary bilocal action.
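For reference, a generic form of the 2PI effective action (the Cornwall-Jackiw-Tomboulis construction) is sketched below; signs and factors depend on conventions and on the statistics of the fields, so this is not the specific SYK or tensor-model expression computed in the paper.

```latex
% Generic 2PI (CJT) effective action for a field phi with full propagator G (sketch).
\Gamma[\phi, G] \;=\; S[\phi]
  \;+\; \tfrac{i}{2}\,\mathrm{Tr}\ln G^{-1}
  \;+\; \tfrac{i}{2}\,\mathrm{Tr}\!\left[G_0^{-1}(\phi)\,G\right]
  \;+\; \Gamma_2[\phi, G] \;+\; \text{const},
\qquad
G_0^{-1}(\phi) = \frac{\delta^2 S}{\delta\phi\,\delta\phi}\, .
% Gamma_2 is the sum of two-particle-irreducible vacuum diagrams built with G;
% stationarity, dGamma/dG = 0, gives the Schwinger-Dyson (gap) equation
% G^{-1} = G_0^{-1} - Sigma with Sigma ~ dGamma_2/dG, i.e. the bilocal
% (G, Sigma) formulation referred to above.
```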
Downs, Shauna M; Thow, Anne Marie; Ghosh-Jerath, Suparna; Leeder, Stephen R
2015-01-01
The national Government of India has published draft regulation proposing a 5% upper limit of trans fat in partially hydrogenated vegetable oils (PHVOs). Global recommendations are to replace PHVOs with unsaturated fat but it is not known whether this will be feasible in India. We systematically identified policy options to address the three major underlying agricultural sector issues that influence reformulation with healthier oils: the low productivity of domestically produced oilseeds leading to a reliance on palm oil imports, supply chain wastage, and the low availability of oils high in unsaturated fats. Strengthening domestic supply chains in India will be necessary to maximize health gains associated with product reformulation.
Plans for a Next Generation Space-Based Gravitational-Wave Observatory (NGO)
NASA Technical Reports Server (NTRS)
Livas, Jeffrey C.; Stebbins, Robin T.; Jennrich, Oliver
2012-01-01
The European Space Agency (ESA) is currently in the process of selecting a mission for the Cosmic Visions Program. A space-based gravitational wave observatory in the low-frequency band (0.0001 - 1 Hz) of the gravitational wave spectrum is one of the leading contenders. This low frequency band has a rich spectrum of astrophysical sources, and the LISA concept has been the key mission to cover this science for over twenty years. Tight budgets have recently forced ESA to consider a reformulation of the LISA mission concept that will allow the Cosmic Visions Program to proceed on schedule either with the US as a minority participant, or independently of the US altogether. We report on the status of these reformulation efforts.
Reformulating the Schrödinger equation as a Shabat-Zakharov system
NASA Astrophysics Data System (ADS)
Boonserm, Petarpa; Visser, Matt
2010-02-01
We reformulate the second-order Schrödinger equation as a set of two coupled first-order differential equations, a so-called "Shabat-Zakharov system" (sometimes called a "Zakharov-Shabat" system). There is considerable flexibility in this approach, and we emphasize the utility of introducing an "auxiliary condition" or "gauge condition" that is used to cut down the degrees of freedom. Using this formalism, we derive the explicit (but formal) general solution to the Schrödinger equation. The general solution depends on three arbitrarily chosen functions, and a path-ordered exponential matrix. If one considers path ordering to be an "elementary" process, then this represents complete quadrature, albeit formal, of the second-order linear ordinary differential equation.
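As an illustration of the reformulation, the display below shows one concrete gauge choice (auxiliary function taken equal to the local wavenumber); the paper keeps the auxiliary function arbitrary, so this is a special case rather than the general construction.

```latex
% One concrete gauge choice illustrating the reformulation (sketch).
\psi(x) = \frac{a(x)}{\sqrt{k(x)}}\, e^{+i\theta(x)} + \frac{b(x)}{\sqrt{k(x)}}\, e^{-i\theta(x)},
\qquad \theta(x) = \int^{x} k(\bar x)\, d\bar x ,
% together with the auxiliary ("gauge") condition
\psi'(x) = i\sqrt{k(x)}\left[a(x)\, e^{+i\theta(x)} - b(x)\, e^{-i\theta(x)}\right],
% turns  psi'' + k^2(x) psi = 0  into the coupled first-order system
\frac{da}{dx} = \frac{k'(x)}{2k(x)}\, e^{-2i\theta(x)}\, b(x),
\qquad
\frac{db}{dx} = \frac{k'(x)}{2k(x)}\, e^{+2i\theta(x)}\, a(x),
% whose path-ordered-exponential solution is the "formal general solution"
% mentioned in the abstract.
```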
Reformulation of Possio's kernel with application to unsteady wind tunnel interference
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1980-01-01
An efficient method for computing the Possio kernel has remained elusive up to the present time. In this paper the Possio kernel is reformulated so that it can be computed accurately using existing high-precision numerical quadrature techniques. Convergence to the correct values is demonstrated and optimization of the integration procedures is discussed. Since more general kernels, such as those associated with unsteady flows in ventilated wind tunnels, are analytic perturbations of the Possio free air kernel, a more accurate evaluation of their collocation matrices results, with an exponential improvement in convergence. An application to predicting the frequency response of an airfoil-trailing edge control system in a wind tunnel compared with that in free air is given, showing strong interference effects.
The diagnostic status of homosexuality in DSM-III: a reformulation of the issues.
Spitzer, R L
1981-02-01
In 1973 homosexuality per se was removed from the DSM-II classification of mental disorders and replaced by the category Sexual Orientation Disturbance. This represented a compromise between the view that preferential homosexuality is invariably a mental disorder and the view that it is merely a normal sexual variant. While the 1973 DSM-II controversy was highly public, more recently a related but less public controversy involved what became the DSM-III category of Ego-dystonic Homosexuality. The author presents the DSM-III controversy and a reformulation of the issues involved in the diagnostic status of homosexuality. He argues that what is at issue is a value judgment about heterosexuality, rather than a factual dispute about homosexuality.
Development of a piezoelectric sensing-based SHM system for nuclear dry storage systems
NASA Astrophysics Data System (ADS)
Ma, Linlin; Lin, Bin; Sun, Xiaoyi; Howden, Stephen; Yu, Lingyu
2016-04-01
In the US, there are over 1482 dry cask storage systems (DCSS) in use, storing 57,807 fuel assemblies. Monitoring is necessary to determine and predict the degradation state of the systems and structures. Therefore, nondestructive monitoring is urgently needed and must be integrated into the fuel cycle to quantify the "state of health" for the safe operation of nuclear power plants (NPP) and radioactive waste storage systems (RWSS). Innovative approaches are desired to evaluate the degradation and damage of used fuel containers under extended storage. Structural health monitoring (SHM) is an emerging technology that uses in-situ sensor systems to perform rapid nondestructive detection of structural damage as well as long-term integrity monitoring. It has been extensively studied in aerospace engineering over the past two decades. This paper presents the development of an SHM and damage detection methodology based on piezoelectric sensor technologies for steel canisters in nuclear dry cask storage systems. Durability and survivability of piezoelectric sensors under temperature influence are first investigated in this work by evaluating sensor capacitance and electromechanical admittance. Toward damage detection, the piezoelectric sensors (PES) are configured in a pitch-catch setup to transmit and receive guided waves in plate-like structures. When the inspected structure has damage such as a surface defect, the incident guided waves will be reflected or scattered, resulting in changes in the wave measurements. A sparse array algorithm is developed and implemented using multiple sensors to image the structure. The sparse array algorithm is also evaluated at elevated temperature.
Self-learning health monitoring algorithm in composite structures
NASA Astrophysics Data System (ADS)
Grassia, Luigi; Iannone, Michele; Califano, America; D'Amore, Alberto
2018-02-01
The paper describes a system that is able to monitor the health state of a composite structure in real time. The hardware of the system consists of a wire of strain sensors connected to a control unit. The software of the system processes the strain data and, in real time, is able to detect the presence of possible damage in the structures monitored with the strain sensors. The algorithm requires as input only the strains of the monitored structure measured in real time, i.e., those strains arising from the deformations of the composite structure under the working loads. The health monitoring system does not require any additional device to interrogate the structure, as is often used in the literature; instead, it is based on a self-learning procedure. The strain data acquired when the structure is healthy are used to set up the correlations between the strains at different positions of the structure by means of a neural network. Once the correlations between the strains at different positions have been set up, these correlations act as a fingerprint of the healthy structure. In the case of damage, the correlations between the strains near the damage location change owing to the change in the stiffness of the structure caused by the damage. The developed software is able to recognize the change in the transfer function between the strains and consequently is able to detect the damage.
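A minimal sketch of the self-learning idea follows: a small regressor is trained on healthy data to predict one strain channel from the others, and damage is flagged when the prediction residual grows. The network size, threshold rule and simulated strain field are illustrative assumptions, not the system described in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
loads = rng.uniform(-1, 1, (2000, 1))
# Four strain channels that are mutually correlated through the working loads.
healthy = np.hstack([loads * w for w in (1.0, 0.8, -0.5, 1.2)]) + 0.01 * rng.standard_normal((2000, 4))

target, others = 2, [0, 1, 3]                # predict channel 2 from the remaining channels
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(healthy[:, others], healthy[:, target])

resid = healthy[:, target] - net.predict(healthy[:, others])
threshold = 5 * resid.std()                  # "fingerprint" of the healthy correlations

# Simulated damage: a local stiffness change alters the strain at channel 2.
damaged = healthy.copy()
damaged[:, target] *= 0.7
new_resid = damaged[:, target] - net.predict(damaged[:, others])
print("alarm fraction:", np.mean(np.abs(new_resid) > threshold))
```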
NASA Astrophysics Data System (ADS)
Farooq, Umar; Myler, Peter
This work is concerned with physical testing of carbon fibre laminated composite panels under low-velocity drop-weight impacts from flat and round nose impactors. Eight, sixteen, and twenty-four ply panels were considered. Non-destructive damage inspections of tested specimens were conducted to approximate impact-induced damage. Recorded data were correlated to load-time, load-deflection, and energy-time history plots to interpret impact-induced damage. Data filtering techniques were also applied to the noisy data that unavoidably arise due to limitations of the testing and logging systems. Built-in, statistical, and numerical filters effectively predicted load thresholds for eight and sixteen ply laminates. However, flat nose impact of twenty-four ply laminates produced clipped data that could only be de-noised using oscillatory algorithms. Filtering and extrapolation of such data have received little attention in the literature and need to be investigated. The present work demonstrated filtering and extrapolation of the clipped data using a Fast Fourier Convolution algorithm to predict load thresholds. Selected results were compared to the damage zones identified with C-scan and acceptable agreement was observed. Based on the results, it is proposed that applying advanced data filtering and analysis methods to data collected with the available resources effectively enhances data interpretation without resorting to additional resources. The methodology could be useful for efficient and reliable data analysis and impact-induced damage prediction for data from similar cases.
Sepehrinezhad, Alireza; Toufigh, Vahab
2018-05-25
Ultrasonic wave attenuation is an effective descriptor of distributed damage in inhomogeneous materials. Methods developed to measure wave attenuation have the potential to provide an in-situ evaluation of existing concrete structures insofar as they are accurate and time-efficient. In this study, material classification and distributed damage evaluation were investigated based on the sinusoidal modeling of the response from the through-transmission ultrasonic tests on polymer concrete specimens. The response signal was modeled as a single damped sinusoid or a sum of damped sinusoids. Due to the inhomogeneous nature of concrete materials, model parameters may vary from one specimen to another. Therefore, these parameters are not known in advance and should be estimated while the response signal is being received. The modeling procedure used in this study involves a data-adaptive algorithm to estimate the parameters online. Data-adaptive algorithms are used due to a lack of knowledge of the model parameters. The damping factor was estimated as a descriptor of the distributed damage. The results were compared in two different cases as follows: (1) constant excitation frequency with varying concrete mixtures and (2) constant mixture with varying excitation frequencies. The specimens were also loaded up to their ultimate compressive strength to investigate the effect of distributed damage in the response signal. The results of the estimation indicated that the damping was highly sensitive to the change in material inhomogeneity, even in comparable mixtures. In addition to the proposed method, three methods were employed to compare the results based on their accuracy in the classification of materials and the evaluation of the distributed damage. It is shown that the estimated damping factor is not only sensitive to damage in the final stages of loading, but it is also applicable in evaluating micro-damage in the earlier stages, providing a reliable descriptor of damage. In addition, the modified amplitude ratio method is introduced as an improvement of the classical method. The proposed methods were validated to be effective descriptors of distributed damage. The presented models were also in good agreement with the experimental data. Copyright © 2018 Elsevier B.V. All rights reserved.
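A minimal sketch of the underlying signal model follows: a damped sinusoid is fitted to a simulated through-transmission response and its damping factor is read off. The study estimates the parameters online with a data-adaptive algorithm, whereas the batch curve fit and all numerical values below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, amp, sigma, freq, phase):
    """Single damped sinusoid: amp * exp(-sigma t) * sin(2 pi freq t + phase)."""
    return amp * np.exp(-sigma * t) * np.sin(2 * np.pi * freq * t + phase)

fs = 1e6                                       # 1 MHz sampling (illustrative)
t = np.arange(0, 1e-3, 1 / fs)
true = damped_sine(t, 1.0, 3000.0, 54e3, 0.4)  # 54 kHz tone burst, damping 3000 1/s
noisy = true + 0.05 * np.random.randn(t.size)

# The excitation frequency is known in practice, so it serves as the initial guess.
p0 = [0.8, 1000.0, 54e3, 0.0]
popt, _ = curve_fit(damped_sine, t, noisy, p0=p0)
print("estimated damping factor (1/s):", popt[1])
```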
Damage tolerance of bonded composite aircraft repairs for metallic structures
NASA Astrophysics Data System (ADS)
Clark, Randal John
This thesis describes the development and validation of methods for damage tolerance substantiation of bonded composite repairs applied to cracked plates. This technology is used to repair metal aircraft structures, offering improvements in fatigue life, cost, manufacturability, and inspectability when compared to riveted repairs. The work focuses on the effects of plate thickness and bending on repair life, and covers fundamental aspects of fracture and fatigue of cracked plates and bonded joints. This project falls under the UBC Bonded Composite Repair Program, which has the goal of certification and widespread use of bonded repairs in civilian air transportation. This thesis analyses the plate thickness and transverse stress effects on fracture of repaired plates and the related problem of induced geometrically nonlinear bending in unbalanced (single-sided) repairs. The author begins by developing a classification scheme for assigning repair damage tolerance substantiation requirements based upon stress-based adhesive fracture/fatigue criteria and the residual strength of the original structure. The governing equations for bending of cracked plates are then reformulated and line-spring models are developed for linear and nonlinear coupled bending and extension of reinforced cracks. The line-spring models were used to correct the Wang and Rose energy method for the determination of the long-crack limit stress intensity, and to develop a new interpolation model for repaired cracks of arbitrary length. The analysis was validated using finite element models and data from mechanical tests performed on hybrid bonded joints and repair specimens that are representative of an in-service repair. This work will allow designers to evaluate the damage tolerance of the repaired plate, the adhesive, and the composite patch, which is an airworthiness requirement under FAR (Federal Aviation Regulations) 25.571. The thesis concludes by assessing the remaining barriers to certification of bonded repairs, discussing the results of the analysis, and making suggestions for future work. The developed techniques should also prove to be useful for the analysis of fibre-reinforced metal laminates and other layered structures. Some concepts are general and should be useful in the analysis of any plate with large in-plane stress gradients that lead to significant transverse stresses.
Negative Selection Algorithm for Aircraft Fault Detection
NASA Technical Reports Server (NTRS)
Dasgupta, D.; KrishnaKumar, K.; Wong, D.; Berry, M.
2004-01-01
We investigated a real-valued Negative Selection Algorithm (NSA) for fault detection in man-in-the-loop aircraft operation. The detection algorithm uses body-axes angular rate sensory data exhibiting the normal flight behavior patterns, to generate probabilistically a set of fault detectors that can detect any abnormalities (including faults and damages) in the behavior pattern of the aircraft flight. We performed experiments with datasets (collected under normal and various simulated failure conditions) using the NASA Ames man-in-the-loop high-fidelity C-17 flight simulator. The paper provides results of experiments with different datasets representing various failure conditions.
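A minimal sketch of a real-valued negative selection scheme follows: random detectors survive only if they are far from all "self" (normal flight) samples, and a new sample is flagged when it falls inside a surviving detector's radius. The radii, detector count and two-dimensional feature space are illustrative assumptions, not the study's NSA configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

def train_detectors(self_data, n_detectors=200, self_radius=0.1):
    """Keep only random candidates that do not match normal behavior (negative selection)."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.uniform(0, 1, self_data.shape[1])
        if np.min(np.linalg.norm(self_data - cand, axis=1)) > self_radius:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(sample, detectors, detect_radius=0.1):
    """A sample activates the detector set if it lies within any detector's radius."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) < detect_radius))

# Normalized angular-rate features clustered around normal flight behavior.
normal = 0.3 + 0.1 * rng.standard_normal((500, 2))
detectors = train_detectors(np.clip(normal, 0, 1))

print(is_anomalous(np.array([0.32, 0.28]), detectors))   # normal-like -> usually False
print(is_anomalous(np.array([0.85, 0.90]), detectors))   # abnormal    -> usually True
```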
Lu, Guangtao; Feng, Qian; Li, Yourong; Wang, Hao; Song, Gangbing
2017-01-01
During the propagation of ultrasonic waves in structures, there is usually energy loss due to ultrasound energy diffusion and dissipation. The aim of this research is to characterize the ultrasound energy diffusion that occurs due to small-size damage on an aluminum plate using piezoceramic transducers, for the future purpose of developing a damage detection algorithm. The ultrasonic energy diffusion coefficient is related to the damage distributed in the medium. Meanwhile, the ultrasonic energy dissipation coefficient is related to the inhomogeneity of the medium. Both are usually employed to describe the characteristics of ultrasound energy diffusion. The existence of multiple modes of Lamb waves in metallic plate structures results in the asynchronous energy transport of different modes. As a result, the mode of Lamb waves has a great influence on ultrasound energy diffusion and thus has to be chosen appropriately. In order to study the characteristics of ultrasound energy diffusion in metallic plate structures, an experimental setup of an aluminum plate with a through-hole, whose diameter varies from 0.6 mm to 1.2 mm, is used as the test specimen with the help of piezoceramic transducers. The experimental results of two categories of damages at different locations reveal that the existence of damage changes the energy transport between the actuator and the sensor. Also, when there is only one dominant mode of Lamb wave excited in the structure, the ultrasound energy diffusion coefficient decreases approximately linearly with the diameter of the simulated damage. Meanwhile, the ultrasonic energy dissipation coefficient increases approximately linearly with the diameter of the simulated damage. However, when two or more modes of Lamb waves are excited, due to the existence of different group velocities between the different modes, the energy transport of the different modes is asynchronous, and the ultrasonic energy diffusion is not strictly linear with the size of the damage. Therefore, it is recommended that only one dominant mode of Lamb wave should be excited during the characterization process, in order to ensure that the linear relationship between the damage size and the characteristic parameters is maintained. In addition, the findings from this paper demonstrate the potential of developing future damage detection algorithms using the linear relationships between damage size and the ultrasound energy diffusion coefficient or ultrasonic energy dissipation coefficient when a single dominant mode is excited. PMID:29207530
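For reference, the commonly used diffuse-ultrasound model behind the two coefficients discussed above is sketched below; the paper's exact fitting procedure may differ.

```latex
% Diffusion model for the late-time ultrasonic energy density E(r,t) at distance r
% from the source in a plate-like (2-D) medium (sketch; conventions vary):
\frac{\partial E}{\partial t} \;=\; D\,\nabla^{2}E \;-\; \sigma\,E ,
\qquad
E(r,t) \;=\; \frac{E_{0}}{4\pi D t}\,
  \exp\!\left(-\frac{r^{2}}{4Dt} \;-\; \sigma t\right),
% where D is the energy diffusion coefficient and sigma the dissipation coefficient;
% both are obtained by fitting the measured energy envelope, and the abstract
% reports their approximately linear variation with the hole diameter when a
% single Lamb mode dominates.
```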
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) in the past decades, there are few methods that can detect multiple slight damage sites in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms are utilized to search the damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. Both results demonstrate that the proposed method has a superior noise tolerant ability, as well as damage sensitivity, without knowing material properties or boundary conditions.
[Trampoline accident with anterior knee dislocation caused popliteal artery disruption].
Pedersen, Peter Heide; Høgh, Annette Langager
2011-10-17
Only a few reports describe the risk of neurovascular damage following knee dislocation while trampolining. A 16-year-old male in a trampoline accident sustained multi-ligament damage and occlusion of the popliteal artery. The occlusion did not become clinically apparent until 24 hours after the trauma. He underwent vascular surgery (short saphenous bypass). We recommend implementing algorithms for the management of suspected knee dislocation and possible accompanying neurovascular injuries in all trauma centers.
TPS In-Flight Health Monitoring Project Progress Report
NASA Technical Reports Server (NTRS)
Kostyk, Chris; Richards, Lance; Hudston, Larry; Prosser, William
2007-01-01
Progress in the development of new thermal protection systems (TPS) is reported. New approaches use embedded lightweight, sensitive, fiber optic strain and temperature sensors within the TPS. Goals of the program are to develop and demonstrate a prototype TPS health monitoring system, develop a thermal-based damage detection algorithm, characterize limits of sensor/system performance, and develop a methodology transferable to new designs of TPS health monitoring systems. Tasks completed during the project helped establish confidence in understanding of both the test setup and the model and validated system/sensor performance in a simple TPS structure. Other progress included completion of initial system testing, commencement of the algorithm development effort, generation of a damaged thermal response characteristics database, initial development of a test plan for integration testing of proven FBG sensors in a simple TPS structure, and development of partnerships to apply the technology.
Guaranteeing robustness of structural condition monitoring to environmental variability
NASA Astrophysics Data System (ADS)
Van Buren, Kendra; Reilly, Jack; Neal, Kyle; Edwards, Harry; Hemez, François
2017-01-01
Advances in sensor deployment and computational modeling have allowed significant strides to be recently made in the field of Structural Health Monitoring (SHM). One widely used SHM strategy is to perform a vibration analysis where a model of the structure's pristine (undamaged) condition is compared with vibration response data collected from the physical structure. Discrepancies between model predictions and monitoring data can be interpreted as structural damage. Unfortunately, multiple sources of uncertainty must also be considered in the analysis, including environmental variability, unknown model functional forms, and unknown values of model parameters. Not accounting for these sources of uncertainty can lead to false-positives or false-negatives in the structural condition assessment. To manage the uncertainty, we propose a robust SHM methodology that combines three technologies. A time series algorithm is trained using "baseline" data to predict the vibration response, compare predictions to actual measurements collected on a potentially damaged structure, and calculate a user-defined damage indicator. The second technology handles the uncertainty present in the problem. An analysis of robustness is performed to propagate this uncertainty through the time series algorithm and obtain the corresponding bounds of variation of the damage indicator. The uncertainty description and robustness analysis are both inspired by the theory of info-gap decision-making. Lastly, an appropriate "size" of the uncertainty space is determined through physical experiments performed in laboratory conditions. Our hypothesis is that examining how the uncertainty space changes throughout time might lead to superior diagnostics of structural damage as compared to only monitoring the damage indicator. This methodology is applied to a portal frame structure to assess if the strategy holds promise for robust SHM. (Publication approved for unlimited, public release on October-28-2015, LA-UR-15-28442, unclassified.)
Improving the resolution for Lamb wave testing via a smoothed Capon algorithm
NASA Astrophysics Data System (ADS)
Cao, Xuwei; Zeng, Liang; Lin, Jing; Hua, Jiadong
2018-04-01
Lamb wave testing is promising for damage detection and evaluation in large-area structures. The dispersion of Lamb waves is often unavoidable, restricting testing resolution and making the signal hard to interpret. A smoothed Capon algorithm is proposed in this paper to estimate the accurate path length of each wave packet. In the algorithm, frequency domain whitening is firstly used to obtain the transfer function in the bandwidth of the excitation pulse. Subsequently, wavenumber domain smoothing is employed to reduce the correlation between wave packets. Finally, the path lengths are determined by distance domain searching based on the Capon algorithm. Simulations are applied to optimize the number of smoothing times. Experiments are performed on an aluminum plate consisting of two simulated defects. The results demonstrate that spatial resolution is improved significantly by the proposed algorithm.
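A minimal sketch of a smoothed Capon estimator in the wavenumber domain follows, used here to separate two closely spaced propagation distances; the whitening step, parameter values and the simulated signal are simplifications relative to the algorithm in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

dk = 5.0                                        # wavenumber step (rad/m), illustrative
k = dk * np.arange(200)                         # uniform wavenumber grid after whitening
x = np.exp(-1j * k * 0.40) + 0.8 * np.exp(-1j * k * 0.50)   # two overlapping wave packets
x += 0.05 * (np.random.randn(k.size) + 1j * np.random.randn(k.size))

L = 60                                          # sub-vector (smoothing) length
subs = np.array([x[m:m + L] for m in range(k.size - L + 1)])
R = (subs[:, :, None] * subs[:, None, :].conj()).mean(axis=0)   # smoothed covariance
R += 1e-3 * np.trace(R).real / L * np.eye(L)    # diagonal loading for stability
Rinv = np.linalg.inv(R)

d_grid = np.linspace(0.2, 0.6, 401)             # candidate path lengths (m)
steer = np.exp(-1j * dk * np.outer(np.arange(L), d_grid))       # steering vectors a(d)
P = 1.0 / np.real(np.einsum("lm,lk,km->m", steer.conj(), Rinv, steer))   # Capon spectrum

peaks, _ = find_peaks(P)
top2 = peaks[np.argsort(P[peaks])[-2:]]
print("estimated path lengths (m):", np.sort(d_grid[top2]))
```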
Locating and decoding barcodes in fuzzy images captured by smart phones
NASA Astrophysics Data System (ADS)
Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
With the growing commercial use of barcodes, the demand for detecting barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones affects the decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length and a decoding algorithm with a high fault-tolerance rate. Unlike existing approaches, the location algorithm is based on the edge segment length of EAN-13 barcodes, while our decoding algorithm allows fuzzy regions to appear in the barcode image. Experiments are performed on damaged, contaminated and scratched digital images, and provide quite promising results for EAN-13 barcode location and decoding.
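One building block such a fault-tolerant decoder can rely on is the EAN-13 check digit, sketched below; the image-processing stages themselves are not shown, and the example codes are arbitrary.

```python
def ean13_is_valid(code: str) -> bool:
    """True if the 13-digit string has a correct EAN-13 checksum."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weights alternate 1, 3, 1, 3, ... over the first 12 digits.
    checksum = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - checksum % 10) % 10 == digits[12]

print(ean13_is_valid("4006381333931"))   # True: valid example code
print(ean13_is_valid("4006381333932"))   # False: corrupted last digit
```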
Bowen, Raffick A R; Vu, Chi; Remaley, Alan T; Hortin, Glen L; Csako, Gyorgy
2007-03-01
Besides total triiodothyronine (TT3), total free fatty acids (FFA) concentrations were higher with serum separator tube (SST) than Vacuette tubes. The effects of surfactant, rubber stopper, and separator gel from various tubes were investigated on FFA, beta-hydroxybutyrate (beta-HB), and TT3 with 8 different tube types in blood specimens of apparently healthy volunteers. Compared to Vacuette tubes, serum FFA and TT3 concentrations were significantly higher in SST than glass tubes. Reformulated SST eliminated the increase in TT3 but not FFA. No significant difference was observed for beta-HB concentration among tube types. Surfactant and rubber stoppers from the different tube types significantly increased TT3 but not FFA and beta-HB concentrations. Agitation of whole blood but not serum or plasma specimens with separator gel from SST, reformulated SST and plasma preparation tube (PPT) tubes compared to Vacuette tubes gave higher FFA but not beta-HB levels. Unidentified component(s) from the separator gel in SST, reformulated SST and PPT tubes cause falsely high FFA concentration. In contrast to TT3, falsely high FFA results require exposure of whole blood and not serum to tube constituent(s). The approach employed here may serve as a model for assessing interference(s) from tube constituent(s).
Yon, Bethany A; Johnson, Rachel K
2014-03-01
The United States Department of Agriculture's (USDA) new nutrition standards for school meals include sweeping changes setting upper limits on calories served and limit milk offerings to low fat or fat-free and, if flavored, only fat-free. Milk processors are lowering the calories in flavored milks. As changes to milk impact school lunch participation and milk consumption, it is important to know the impact of these modifications. Elementary and middle schools from 17 public school districts that changed from standard flavored milk (160-180 kcal/8 oz) to lower calorie flavored milk (140-150 kcal/8 oz) between 2008 and 2009 were enrolled. Milk shipment and National School Lunch Program (NSLP) participation rates were collected for 3 time periods over 12 months (pre-reformulation, at the time of reformulation, and after reformulation). Linear mixed models were used with adjustments for free/reduced meal eligibility. No changes were seen in shipment of flavored milk or all milk, including unflavored. The NSLP participation rates dropped when lower calorie flavored milk was first offered, but recovered over time. While school children appear to accept lower calorie flavored milk, further monitoring is warranted as most of the flavored milks offered were not fat-free as was required by USDA as of fall 2012. © 2014, American School Health Association.
NASA Technical Reports Server (NTRS)
Clark-Ingram, Marceia
2010-01-01
Brominated Flame Retardants (BFRs) are widely used in the manufacture of electrical and electronic components and as additives in formulations for foams, plastics and rubbers. The United States (US) and the European Union (EU) have increased regulation and monitoring of targeted BFRs, such as Polybrominated Diphenyl Ethers (PBDEs), due to their bioaccumulative effects in humans and animals. In response, manufacturers and vendors of BFR-containing materials are changing flame-retardant additives, sometimes without notifying BFR users. In some instances, Deca-bromodiphenylether (Deca-BDE) and other families of flame retardants are being used as replacement flame retardants for penta-BDE and octa-BDE. The reformulation of the BFR-containing material typically results in the removal of the targeted PBDE and replacement with a non-PBDE chemical or non-targeted PBDE. Many users of PBDE-based materials are concerned that vendors will perform reformulation and not inform the end user. Materials performance such as flammability, adhesion, and tensile strength may be altered due to reformulation. The requalification of newly formulated materials may be required, or replacement materials may have to be identified and qualified. The Shuttle Environmental Assurance (SEA) team identified a risk to the Space Shuttle Program associated with the possibility that targeted PBDEs may be replaced without notification. Resultant decreases in flame retardancy, Liquid Oxygen (LOX) compatibility, or material performance could have serious consequences.
Dietary Impact of Adding Potassium Chloride to Foods as a Sodium Reduction Technique.
van Buren, Leo; Dötsch-Klerk, Mariska; Seewi, Gila; Newson, Rachel S
2016-04-21
Potassium chloride is a leading reformulation technology for reducing sodium in food products. As, globally, sodium intake exceeds guidelines, this technology is beneficial; however, its potential impact on potassium intake is unknown. Therefore, a modeling study was conducted using Dutch National Food Survey data to examine the dietary impact of reformulation (n = 2106). Product-specific sodium criteria, to enable a maximum daily sodium chloride intake of 5 grams/day, were applied to all foods consumed in the survey. The impact of replacing 20%, 50% and 100% of sodium chloride from each product with potassium chloride was modeled. At baseline median, potassium intake was 3334 mg/day. An increase in the median intake of potassium of 453 mg/day was seen when a 20% replacement was applied, 674 mg/day with a 50% replacement scenario and 733 mg/day with a 100% replacement scenario. Reformulation had the largest impact on: bread, processed fruit and vegetables, snacks and processed meat. Replacement of sodium chloride by potassium chloride, particularly in key contributing product groups, would result in better compliance to potassium intake guidelines (3510 mg/day). Moreover, it could be considered safe for the general adult population, as intake remains compliant with EFSA guidelines. Based on current modeling potassium chloride presents as a valuable, safe replacer for sodium chloride in food products.
SIMPLE GREEN® 2013 Reformulation
Technical product bulletin: this surface washing agent used in oil spill cleanups is equally effective in fresh water, estuarine, and marine environments at all temperatures. Spray directly on surface of oil.
Damage localization of marine risers using time series of vibration signals
NASA Astrophysics Data System (ADS)
Liu, Hao; Yang, Hezhen; Liu, Fushun
2014-10-01
Based on dynamic response signals, a damage detection algorithm is developed for marine risers. Damage detection methods based on numerous modal properties have encountered issues in research in the offshore oil community. For example, a significant increase in structural mass due to marine plant/animal growth, and changes in modal properties caused by equipment noise, are not the result of damage for riser structures. In an attempt to eliminate the need to determine modal parameters, a data-based method is developed. The implementation of the method requires that vibration data are first standardized to remove the influence of different loading conditions, and the autoregressive moving average (ARMA) model is used to fit the vibration response signals. In addition, a damage feature factor is introduced based on the autoregressive (AR) parameters. After that, the Euclidean distance between ARMA models is used as a damage indicator for damage detection and localization, and a top tensioned riser simulation model with different damage scenarios is analyzed using the proposed method, with dynamic acceleration responses of a marine riser as sensor data. Finally, the influence of measurement noise is analyzed. According to the damage localization results, the proposed method provides accurate damage locations for risers and is robust to noise effects.
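A minimal sketch of the data-driven idea follows: the response is standardized, an autoregressive model is fitted, and a Euclidean distance between coefficient vectors (baseline versus current) serves as the damage indicator. The plain AR fit, model order and simulated riser response are simplifications of the ARMA-based procedure described above.

```python
import numpy as np

def ar_coeffs(signal, order=10):
    """Standardize the record and estimate AR coefficients by Yule-Walker."""
    z = (signal - signal.mean()) / signal.std()          # removes loading-level effects
    r = np.correlate(z, z, mode="full")[z.size - 1:] / z.size
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

rng = np.random.default_rng(5)
def riser_response(freq, n=4000, fs=100.0):
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

baseline = ar_coeffs(riser_response(1.20))               # healthy acceleration record
healthy2 = ar_coeffs(riser_response(1.20))
damaged = ar_coeffs(riser_response(1.05))                 # damage shifts the dynamics

print("distance healthy:", np.linalg.norm(healthy2 - baseline))
print("distance damaged:", np.linalg.norm(damaged - baseline))
```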
Cascaded Optimization for a Persistent Data Ferrying Unmanned Aircraft
NASA Astrophysics Data System (ADS)
Carfang, Anthony
This dissertation develops and assesses a cascaded method for designing optimal periodic trajectories and link schedules for an unmanned aircraft to ferry data between stationary ground nodes. This results in a fast solution method without the need to artificially constrain system dynamics. Focusing on a fundamental ferrying problem that involves one source and one destination, but includes complex vehicle and Radio-Frequency (RF) dynamics, a cascaded structure to the system dynamics is uncovered. This structure is exploited by reformulating the nonlinear optimization problem into one that reduces the independent control to the vehicle's motion, while the link scheduling control is folded into the objective function and implemented as an optimal policy that depends on candidate motion control. This formulation is proven to maintain optimality while reducing computation time in comparison to traditional ferry optimization methods. The discrete link scheduling problem takes the form of a combinatorial optimization problem that is known to be NP-Hard. A derived necessary condition for optimality guides the development of several heuristic algorithms, specifically the Most-Data-First Algorithm and the Knapsack Adaptation. These heuristics are extended to larger ferrying scenarios, and assessed analytically and through Monte Carlo simulation, showing better throughput performance in the same order of magnitude of computation time in comparison to other common link scheduling policies. The cascaded optimization method is implemented with a novel embedded software system on a small, unmanned aircraft to validate the simulation results with field experiments. To address the sensitivity of results on trajectory tracking performance, a system that combines motion and link control with waypoint-based navigation is developed and assessed through field experiments. The data ferrying algorithms are further extended by incorporating a Gaussian process to opportunistically learn the RF environment. By continuously improving RF models, the cascaded planner can continually improve the ferrying system's overall performance.
NASA Astrophysics Data System (ADS)
Crosta, Giovanni Franco; Pan, Yong-Le; Aptowicz, Kevin B.; Casati, Caterina; Pinnick, Ronald G.; Chang, Richard K.; Videen, Gorden W.
2013-12-01
Measurement of two-dimensional angle-resolved optical scattering (TAOS) patterns is an attractive technique for detecting and characterizing micron-sized airborne particles. In general, the interpretation of these patterns and the retrieval of the particle refractive index, shape or size alone, are difficult problems. By reformulating the problem in statistical learning terms, a solution is proposed herewith: rather than identifying airborne particles from their scattering patterns, TAOS patterns themselves are classified through a learning machine, where feature extraction interacts with multivariate statistical analysis. Feature extraction relies on spectrum enhancement, which includes the discrete cosine Fourier transform and non-linear operations. Multivariate statistical analysis includes computation of the principal components and supervised training, based on the maximization of a suitable figure of merit. All algorithms have been combined together to analyze TAOS patterns, organize feature vectors, design classification experiments, carry out supervised training, assign unknown patterns to classes, and fuse information from different training and recognition experiments. The algorithms have been tested on a data set with more than 3000 TAOS patterns. The parameters that control the algorithms at different stages have been allowed to vary within suitable bounds and are optimized to some extent. Classification has been targeted at discriminating aerosolized Bacillus subtilis particles, a simulant of anthrax, from atmospheric aerosol particles and interfering particles, like diesel soot. By assuming that all training and recognition patterns come from the respective reference materials only, the most satisfactory classification result corresponds to 20% false negatives from B. subtilis particles and <11% false positives from all other aerosol particles. The most effective operations have consisted of thresholding TAOS patterns in order to reject defective ones, and forming training sets from three or four pattern classes. The presented automated classification method may be adapted into a real-time operation technique, capable of detecting and characterizing micron-sized airborne particles.
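A minimal sketch of the pipeline's structure, under strong simplifying assumptions: features are log-enhanced 2-D DCT coefficients, dimensionality is reduced with principal components, and an unknown pattern is assigned to the nearest class centroid. The enhancement, feature dimension, and nearest-centroid classifier are stand-ins, not the optimized supervised training described above.

```python
# Hedged sketch: DCT-based spectrum enhancement + PCA + nearest-centroid
# assignment, standing in for the learning machine described in the abstract.
import numpy as np
from scipy.fft import dctn


def extract_features(pattern, keep=16):
    """Non-linear spectrum enhancement: low-order log-magnitude DCT coefficients."""
    spec = np.log1p(np.abs(dctn(pattern, norm="ortho")))
    return spec[:keep, :keep].ravel()


def fit_pca(X, n_components=10):
    """Principal components of the training feature matrix (rows = samples)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]


def classify(patterns, labels, unknown):
    X = np.array([extract_features(p) for p in patterns])
    mean, components = fit_pca(X)
    Z = (X - mean) @ components.T
    centroids = {c: Z[labels == c].mean(axis=0) for c in np.unique(labels)}
    z = (extract_features(unknown) - mean) @ components.T
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    class_a = [rng.random((64, 64)) for _ in range(20)]               # e.g. interferent
    class_b = [rng.random((64, 64)) + np.eye(64) for _ in range(20)]  # e.g. agent simulant
    labels = np.array([0] * 20 + [1] * 20)
    print(classify(class_a + class_b, labels, rng.random((64, 64)) + np.eye(64)))
```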
NASA Astrophysics Data System (ADS)
Yoo, Byungseok
2011-12-01
In the mechanical, aerospace, and civil engineering industries, structural health monitoring (SHM) technology is essential for providing reliable information on the structural integrity of safety-critical structures, which can help reduce the risk of unexpected and sometimes catastrophic failures and also offers cost-effective inspection and maintenance. State-of-the-art SHM research on structural damage diagnosis focuses on developing global, real-time technologies to identify the existence, location, extent, and type of damage. For detecting and monitoring structural damage in plate-like structures, SHM technology based on guided Lamb wave (GLW) interrogation is attractive because of potential benefits such as coverage of a large inspection area in a short time, a simple inspection mechanism, and sensitivity to small damage. However, the GLW method has a few critical issues, such as dispersion, mode conversion and separation, and the existence of multiple modes. Phased array techniques, widely used in civil, military, scientific, and medical fields, may be employed to resolve these drawbacks. The GLW-based phased array approach can effectively examine and analyze complicated structural vibration responses in thin plate structures. Because the phased sensor array operates as a spatial filter for the GLW signals, array signal processing can enhance a desired signal component arriving from a specific direction while suppressing components from other directions. This dissertation presents the development, experimental validation, and damage detection applications of an innovative signal processing algorithm based on a two-dimensional (2-D) spiral phased array in conjunction with the GLW interrogation technique. It starts with the general background of SHM and the associated technology, including the GLW interrogation method. It then focuses on the fundamentals of the GLW-based phased array approach and the development of an innovative signal processing algorithm associated with the 2-D spiral phased sensor array. The SHM approach based on array responses determined by the proposed phased array algorithm is addressed. The experimental validation of the GLW-based 2-D spiral phased array technology and the associated damage detection applications to thin isotropic plates and anisotropic composite plate structures are presented.
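The spatial-filtering role of the phased array can be illustrated with a plain delay-and-sum beamformer: sensor signals are time-shifted according to an assumed plane-wave direction and summed, so waves from the steered direction add coherently. The circular geometry, wave speed, and interpolation used below are assumptions; the dissertation's spiral-array algorithm is more elaborate.

```python
# Hedged delay-and-sum beamforming sketch: signals from sensors at known
# positions are shifted by their geometric delays for a steering direction and
# summed, reinforcing waves arriving from that direction.
import numpy as np


def delay_and_sum(signals, positions, fs, wave_speed, steer_deg):
    """Steer the array output toward `steer_deg` (plane-wave assumption)."""
    direction = np.array([np.cos(np.radians(steer_deg)),
                          np.sin(np.radians(steer_deg))])
    t = np.arange(signals.shape[1]) / fs
    output = np.zeros_like(t)
    for sig, pos in zip(signals, positions):
        delay = np.dot(pos, direction) / wave_speed       # geometric delay per sensor
        output += np.interp(t, t - delay, sig, left=0.0, right=0.0)
    return output / len(signals)


if __name__ == "__main__":
    fs, c = 1.0e6, 3000.0                                 # sample rate [Hz], wave speed [m/s]
    # Circular 8-sensor array (a simple stand-in for the 2-D spiral geometry).
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    positions = 0.01 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    t = np.arange(512) / fs
    pulse = lambda tt: np.exp(-((tt - 100e-6) / 10e-6) ** 2) * np.sin(2 * np.pi * 2e5 * tt)
    incoming = np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
    signals = np.array([pulse(t - np.dot(p, incoming) / c) for p in positions])
    # Output steered toward the true arrival direction vs. a wrong direction.
    print(np.abs(delay_and_sum(signals, positions, fs, c, 30)).max(),
          np.abs(delay_and_sum(signals, positions, fs, c, 120)).max())
```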
Flat Surface Damage Detection System (FSDDS)
NASA Technical Reports Server (NTRS)
Williams, Martha; Lewis, Mark; Gibson, Tracy; Lane, John; Medelius, Pedro; Snyder, Sarah; Ciarlariello, Dan; Parks, Steve; Carrejo, Danny; Rojdev, Kristina
2013-01-01
The Flat Surface Damage Detection System (FSDDS) is a sensory system capable of detecting impact damage to surfaces utilizing a novel sensor system. This system will provide the ability to monitor the integrity of an inflatable habitat during in situ system health monitoring. The system consists of three main custom-designed subsystems: the multi-layer sensing panel, the embedded monitoring system, and the graphical user interface (GUI). The GUI LabVIEW software uses a custom-developed damage detection algorithm to determine the damage location based on the sequence of broken sensing lines; it estimates the damage size and maximum depth and plots the damage location on a graph. The system was successfully demonstrated as a stand-alone technology during the 2011 D-RATS field test. A software modification also allowed communication with the HDU avionics crew display, which was demonstrated remotely (KSC to JSC) during 2012 integration testing. The integrated FSDDS and stand-alone multi-panel systems were demonstrated remotely and at the JSC Mission Operations Test using the Space Network Research Federation (SNRF) network in 2012. FY13 work included FSDDS multi-panel integration with JSC and the SNRF network. The technology can allow for integration with other complementary damage detection systems.
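A heavily simplified sketch of how a damage location and size might be inferred from broken sensing lines in an orthogonal grid; the grid layout, spacing, and outputs below are hypothetical and are not the FSDDS design details.

```python
# Hypothetical sketch: with one layer of horizontal lines and one of vertical
# lines, the set of broken lines in each layer bounds the damage footprint.
# Grid spacing and return values are assumptions for illustration only.
def locate_damage(broken_rows, broken_cols, line_spacing_mm=10.0):
    """Estimate damage centre and size from indices of broken grid lines."""
    if not broken_rows or not broken_cols:
        return None
    cy = (min(broken_rows) + max(broken_rows)) / 2 * line_spacing_mm
    cx = (min(broken_cols) + max(broken_cols)) / 2 * line_spacing_mm
    height = (max(broken_rows) - min(broken_rows) + 1) * line_spacing_mm
    width = (max(broken_cols) - min(broken_cols) + 1) * line_spacing_mm
    return {"centre_mm": (cx, cy), "size_mm": (width, height)}


if __name__ == "__main__":
    # Lines 4-6 broke in the horizontal layer and 10-11 in the vertical layer.
    print(locate_damage(broken_rows=[4, 5, 6], broken_cols=[10, 11]))
```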
Pamwani, Lavish; Habib, Anowarul; Melandsø, Frank; Ahluwalia, Balpreet Singh; Shelke, Amit
2018-06-22
The main aim of the paper is damage detection at the microscale in anisotropic piezoelectric sensors using surface acoustic waves (SAWs). A novel technique based on a single input and multiple outputs of Rayleigh waves is proposed to detect microscale cracks/flaws in the sensor. A convex-shaped interdigital transducer is fabricated for excitation of divergent SAWs in the sensor. An angularly shaped interdigital transducer (IDT) is fabricated at 0 and ±20 degrees for sensing the convex shape evolution of the SAWs. A precalibrated damage was introduced into the piezoelectric sensor material using a micro-indenter in the direction perpendicular to the pointing direction of the SAW. Damage detection algorithms based on empirical mode decomposition (EMD) and principal component analysis (PCA) are implemented to quantify the evolution of damage in the piezoelectric sensor material. The evolution of the damage was quantified using a proposed condition indicator (CI) based on the normalized Euclidean norm of the change in principal angles between the pristine and damaged states. The CI provides a robust and accurate metric for detecting and quantifying damage.
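A hedged sketch of a principal-angle-based condition indicator: PCA subspaces are computed for baseline and current multichannel records, and the indicator is the normalized Euclidean norm of the principal angles between them. The EMD preprocessing used in the paper is omitted here; the subspace dimension and normalization are assumptions.

```python
# Hedged sketch of a condition indicator from principal angles between PCA
# subspaces of baseline and current sensor-response matrices.
import numpy as np
from scipy.linalg import subspace_angles


def principal_subspace(X, n_components=3):
    """Columns spanning the dominant principal directions of X (rows = snapshots)."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:n_components].T


def condition_indicator(X_pristine, X_current, n_components=3):
    U0 = principal_subspace(X_pristine, n_components)
    U1 = principal_subspace(X_current, n_components)
    angles = subspace_angles(U0, U1)                      # radians, length n_components
    return np.linalg.norm(angles) / np.linalg.norm(np.full(n_components, np.pi / 2))


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pristine = rng.standard_normal((200, 8))
    damaged = pristine @ np.diag([1, 1, 1, 1, 3, 3, 1, 1]) + 0.1 * rng.standard_normal((200, 8))
    print(condition_indicator(pristine, pristine + 0.01 * rng.standard_normal((200, 8))))  # small
    print(condition_indicator(pristine, damaged))                                          # larger
```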
ParticleCall: A particle filter for base calling in next-generation sequencing systems
2012-01-01
Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
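For readers unfamiliar with sequential Monte Carlo, the following generic bootstrap particle filter illustrates the propagate-weight-resample loop that ParticleCall builds on; the toy random-walk state-space model here is an assumption and is not the reversible-terminator signal model used by the tool.

```python
# Generic bootstrap particle filter sketch (propagate, weight, resample).
# The random-walk state and Gaussian observation model are assumptions.
import numpy as np


def bootstrap_filter(observations, n_particles=500, proc_std=0.1, obs_std=0.5, rng=None):
    rng = rng or np.random.default_rng()
    particles = rng.standard_normal(n_particles)          # initial particle cloud
    estimates = []
    for y in observations:
        particles = particles + proc_std * rng.standard_normal(n_particles)  # propagate
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)             # likelihood
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)            # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    truth = np.cumsum(0.1 * rng.standard_normal(100))
    obs = truth + 0.5 * rng.standard_normal(100)
    est = bootstrap_filter(obs, rng=rng)
    print(f"RMS tracking error: {np.sqrt(np.mean((est - truth) ** 2)):.3f}")
```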
Studies in the Theory of Quantum Games
NASA Astrophysics Data System (ADS)
Iqbal, Azhar
2005-03-01
Theory of quantum games is a new area of investigation that has undergone rapid development during the last few years. The initial motivation for playing games in the quantum world comes from the possibility of re-formulating quantum communication protocols and algorithms in terms of games between quantum and classical players. This possibility led to the view that quantum games have the potential to provide helpful insight into the working of quantum algorithms, and even to help in finding new ones. This thesis analyzes and compares some interesting games when played classically and quantum mechanically. A large part of the thesis concerns investigations into a refinement of the Nash equilibrium concept. The refinement, called an evolutionarily stable strategy (ESS), was originally introduced in the 1970s by mathematical biologists to model an evolving population using techniques borrowed from game theory. The analysis is developed around situations in which quantization changes ESSs without affecting the corresponding Nash equilibria. Effects of quantization on solution concepts other than the Nash equilibrium are presented and discussed; for this purpose, the notions of value of coalition, backwards-induction outcome, and subgame-perfect outcome are selected. Repeated games are known to have a different information structure than one-shot games, and an investigation is presented into a possible way in which quantization changes the outcome of a repeated game. Lastly, two new suggestions are put forward for playing quantum versions of classical matrix games. The first uses the association of de Broglie waves with travelling material objects as a resource for playing a quantum game. The second concerns an EPR-type setting that directly exploits the correlations in Bell's inequalities to play a bi-matrix game.
Reformulation and solution of the master equation for multiple-well chemical reactions.
Georgievskii, Yuri; Miller, James A; Burke, Michael P; Klippenstein, Stephen J
2013-11-21
We consider an alternative formulation of the master equation for complex-forming chemical reactions with multiple wells and bimolecular products. Within this formulation the dynamical phase space consists of only the microscopic populations of the various isomers making up the reactive complex, while the bimolecular reactants and products are treated equally as sources and sinks. This reformulation yields compact expressions for the phenomenological rate coefficients describing all chemical processes, i.e., internal isomerization reactions, bimolecular-to-bimolecular reactions, isomer-to-bimolecular reactions, and bimolecular-to-isomer reactions. The applicability of the detailed balance condition is discussed and confirmed. We also consider the situation where some of the chemical eigenvalues approach the energy relaxation time scale and show how to modify the phenomenological rate coefficients so that they retain their validity.
Effects of Active Listening, Reformulation, and Imitation on Mediator Success: Preliminary Results.
Fischer-Lokou, Jacques; Lamy, Lubomir; Guéguen, Nicolas; Dubarry, Alexandre
2016-06-01
An experiment with 212 students (100 men, 112 women; M age = 18.3 years, SD = 0.9) was carried out to compare the effect of four techniques used by mediators on the number of agreements contracted by negotiators. Under experimental conditions, mediators were asked either to rephrase (reformulate) negotiators' words or to imitate them or to show active listening behavior, or finally, to use a free technique. More agreements were reached in the active listening condition than in both free and rephrase conditions. Furthermore, mediators in the active listening condition were perceived, by the negotiators, as more efficient than mediators using other techniques, although there was no significant difference observed between the active listening and imitation conditions.
On the stability of equilibrium for a reformulated foreign trade model of three countries
NASA Astrophysics Data System (ADS)
Dassios, Ioannis K.; Kalogeropoulos, Grigoris
2014-06-01
In this paper, we study the stability of equilibrium for a foreign trade model consisting of three countries. Since the gravity equation has proven to be an excellent tool of analysis, adequately stable over time and space all over the world, we extend the problem to three masses. We use the basic structure of the Heckscher-Ohlin-Samuelson model: national income equals consumption outlays plus investment plus exports minus imports. The proposed reformulation of the problem focuses on two basic concepts: (1) the delay inherent in the economic variables and (2) the interaction effects among the three economies involved. Stability and stabilizability conditions are investigated, while numerical examples provide further insight and better understanding. Finally, a generalization of the gravity equation is obtained for the model.
Almíron-Roig, Eva; Monsivais, Pablo; Jebb, Susan A.; Benjamin Neelon, Sara E.; Griffin, Simon J.; Ogilvie, David B.
2015-01-01
We examined the impact of regulatory action to reduce levels of artificial trans–fatty acids (TFAs) in food. We searched Medline, Embase, ISI Web of Knowledge, and EconLit (January 1980 to December 2012) for studies related to government regulation of food- or diet-related health behaviors from which we extracted the subsample of legislative initiatives to reduce artificial TFAs in food. We screened 38 162 articles and identified 14 studies that examined artificial TFA controls limiting permitted levels or mandating labeling. These measures achieved good compliance, with evidence of appropriate reformulation. Regulations grounded on maximum limits and mandated labeling can lead to reductions in actual and reported TFAs in food and appear to encourage food producers to reformulate their products. PMID:25602897
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janes, N.; Ma, L.; Hsu, J.W.
1992-01-01
The Meyer-Overton hypothesis, that anesthesia arises from the nonspecific action of solutes on membrane lipids, is reformulated using colligative thermodynamics. Configurational entropy, the randomness imparted by the solute through the partitioning process, is implicated as the energetic driving force that perturbs cooperative membrane equilibria. A proton NMR partitioning approach based on the anesthetic benzyl alcohol is developed to assess the reformulation. Ring resonances from the partitioned drug are shielded by 0.2 ppm and resolved from the free, aqueous drug. Free alcohol is quantitated in dilute lipid dispersions using an acetate internal standard. Cooperative equilibria in model dipalmitoyl lecithin membranes are examined with changes in temperature and alcohol concentration. The Lβ′ ...
Robust evaluation of time series classification algorithms for structural health monitoring
NASA Astrophysics Data System (ADS)
Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.
2014-03-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xi, T; Jones, I M; Mohrenweiser, H W
2003-11-03
Over 520 different amino acid substitution variants have been previously identified in the systematic screening of 91 human DNA repair genes for sequence variation. Two algorithms were employed to predict the impact of these amino acid substitutions on protein activity. Sorting Intolerant From Tolerant (SIFT) classified 226 of 508 variants (44%) as "Intolerant". Polymorphism Phenotyping (PolyPhen) classed 165 of 489 amino acid substitutions (34%) as "Probably or Possibly Damaging". Another 9-15% of the variants were classed as "Potentially Intolerant or Damaging". The results from the two algorithms are highly associated, with concordance in predicted impact observed for approximately 62% of the variants. Twenty-one to thirty-one percent of the variant proteins are predicted to exhibit reduced activity by both algorithms. These variants occur at slightly lower individual allele frequency than do the variants classified as "Tolerant" or "Benign". Both algorithms correctly predicted the impact of 26 functionally characterized amino acid substitutions in the APE1 protein on biochemical activity, with one exception. It is concluded that a substantial fraction of the missense variants observed in the general human population are functionally relevant. These variants are expected to be the molecular genetic and biochemical basis for the associations of reduced DNA repair capacity phenotypes with elevated cancer risk.
Bladed wheels damage detection through Non-Harmonic Fourier Analysis improved algorithm
NASA Astrophysics Data System (ADS)
Neri, P.
2017-05-01
Recent papers introduced Non-Harmonic Fourier Analysis for bladed wheel damage detection. This technique showed its potential in estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that certain hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at an early stage. The present paper proposes an improved algorithm that allows detection of a blade vibration frequency shift caused by a crack that is very small compared to the blade width. Such a technique could be implemented for condition-based maintenance, allowing the use of non-contact methods for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving valuable information about wheel health. In this configuration, the acquisition time for each blade becomes shorter as the machine rotational speed increases, and traditional Discrete Fourier Transform analysis then offers poor frequency resolution, making it unsuitable for detecting small frequency shifts. Non-Harmonic Fourier Analysis, in contrast, showed high reliability in vibration frequency estimation even with data samples collected over a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
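The core idea of evaluating Fourier coefficients at frequencies that need not coincide with DFT bins can be sketched as follows; the grid search and toy numbers are assumptions, not the improved algorithm of the paper.

```python
# Hedged sketch: estimate a sinusoid's frequency from a short record by
# maximizing the magnitude of the Fourier coefficient over a continuous
# frequency grid rather than over the DFT bins.
import numpy as np


def nonharmonic_frequency(signal, fs, f_min, f_max, n_grid=2000):
    """Return the frequency in [f_min, f_max] maximizing the Fourier magnitude."""
    t = np.arange(len(signal)) / fs
    freqs = np.linspace(f_min, f_max, n_grid)
    # Non-harmonic Fourier coefficients: projection on exp(-j*2*pi*f*t) for any f.
    coeffs = np.exp(-2j * np.pi * np.outer(freqs, t)) @ signal
    return freqs[np.argmax(np.abs(coeffs))]


if __name__ == "__main__":
    fs, f_true = 10_000.0, 437.3
    t = np.arange(64) / fs                                # short record: under 3 periods
    blade_signal = np.sin(2 * np.pi * f_true * t)
    print(nonharmonic_frequency(blade_signal, fs, 400.0, 480.0))  # close to 437.3
    # A plain 64-point DFT has ~156 Hz bin spacing, far too coarse to resolve
    # the small frequency shift induced by an incipient crack.
```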
Systematic review of dietary salt reduction policies: Evidence for an effectiveness hierarchy?
Hyseni, Lirije; Elliot-Green, Alex; Lloyd-Williams, Ffion; Kypridemos, Chris; O’Flaherty, Martin; McGill, Rory; Orton, Lois; Bromley, Helen; Cappuccio, Francesco P.; Capewell, Simon
2017-01-01
Background Non-communicable disease (NCD) prevention strategies now prioritise four major risk factors: food, tobacco, alcohol and physical activity. Dietary salt intake remains much higher than recommended, increasing blood pressure, cardiovascular disease and stomach cancer. Substantial reductions in salt intake are therefore urgently needed. However, the debate continues about the most effective approaches. To inform future prevention programmes, we systematically reviewed the evidence on the effectiveness of possible salt reduction interventions. We further compared “downstream, agentic” approaches targeting individuals with “upstream, structural” policy-based population strategies. Methods We searched six electronic databases (CDSR, CRD, MEDLINE, SCI, SCOPUS and the Campbell Library) using a pre-piloted search strategy focussing on the effectiveness of population interventions to reduce salt intake. Retrieved papers were independently screened, appraised and graded for quality by two researchers. To facilitate comparisons between the interventions, the extracted data were categorised using nine stages along the agentic/structural continuum, from “downstream”: dietary counselling (for individuals, worksites or communities), through media campaigns, nutrition labelling, voluntary and mandatory reformulation, to the most “upstream” regulatory and fiscal interventions, and comprehensive strategies involving multiple components. Results After screening 2,526 candidate papers, 70 were included in this systematic review (49 empirical studies and 21 modelling studies). Some papers described several interventions. Quality was variable. Multi-component strategies involving both upstream and downstream interventions generally achieved the biggest reductions in salt consumption across an entire population, most notably 4g/day in Finland and Japan, 3g/day in Turkey and 1.3g/day recently in the UK. Mandatory reformulation alone could achieve a reduction of approximately 1.45g/day (three separate studies), followed by voluntary reformulation (-0.8g/day), school interventions (-0.7g/day), short term dietary advice (-0.6g/day) and nutrition labelling (-0.4g/day), but each with a wide range. Tax and community-based counselling could each typically reduce salt intake by 0.3g/day, whilst even smaller population benefits were derived from health education media campaigns (-0.1g/day). Worksite interventions achieved an increase in intake (+0.5g/day), however, with a very wide range. Long term dietary advice could achieve a -2g/day reduction under optimal research trial conditions; however, smaller reductions might be anticipated in unselected individuals. Conclusions Comprehensive strategies involving multiple components (reformulation, food labelling and media campaigns) and “upstream” population-wide policies such as mandatory reformulation generally appear to achieve larger reductions in population-wide salt consumption than “downstream”, individually focussed interventions. This ‘effectiveness hierarchy’ might deserve greater emphasis in future NCD prevention strategies. PMID:28542317
Development of a Near Real-Time Hail Damage Swath Identification Algorithm for Vegetation
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Molthan, Andrew L.; Schultz, Kori A.; McGrath, Kevin M.; Burks, Jason E.
2015-01-01
Every year in the Midwest and Great Plains, widespread greenness forms in conjunction with the latter part of the spring-summer growing season. This greenness forms as a result of the high concentration of agricultural areas whose crops reach maturity before the fall harvest. This time of year also coincides with an enhanced hail frequency for the Great Plains (Cintineo et al. 2012). These severe thunderstorms can bring damaging winds and large hail that can result in damage to the surface vegetation. The spatial extent of the damage can be a relatively small, concentrated area or a vast swath of damage that is visible from space. These large areas of damage have been well documented over the years. In the late 1960s, aerial photography was used to evaluate crop damage caused by hail. As satellite remote sensing technology has evolved, the identification of these hail damage streaks has increased, and satellites have made it possible to view these streaks in additional spectral bands. Parker et al. (2005) documented two streaks that occurred in South Dakota using the Moderate Resolution Imaging Spectroradiometer (MODIS), noting the potential impact that these streaks had on the surface temperature and the associated surface fluxes. Gallo et al. (2012) examined the correlation between radar signatures and ground observations from storms that produced a hail damage swath in central Iowa, also using MODIS. Finally, Molthan et al. (2013) identified hail damage streaks through MODIS, Landsat-7, and SPOT observations of different resolutions toward the development of potential near-real-time applications. The manual analysis of hail damage streaks in satellite imagery is both tedious and time consuming, and may be inconsistent from event to event. This study focuses on the development of an objective and automatic algorithm to detect these areas of damage in a more efficient and timely manner. This study utilizes the MODIS sensor aboard the NASA Aqua satellite. Aqua was chosen due to its afternoon orbit over the United States, when land surface temperatures are relatively warm and improve the contrast between damaged and undamaged areas. This orbit is also similar to that of the Suomi National Polar-orbiting Partnership (NPP) satellite, which hosts the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument, the next generation of a MODIS-like sensor in polar orbit.
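An illustrative sketch (not the operational algorithm) of the basic idea: compare post-storm and pre-storm vegetation-index imagery and flag contiguous regions with a sharp NDVI decrease as candidate hail damage swaths. The threshold, minimum region size, and synthetic data are assumptions.

```python
# Hedged sketch: label contiguous regions whose NDVI dropped sharply between a
# pre-storm composite and a post-storm image as candidate damage swaths.
import numpy as np
from scipy import ndimage


def candidate_damage_swaths(ndvi_before, ndvi_after, drop_threshold=0.2, min_pixels=50):
    """Label contiguous regions with a large NDVI decrease."""
    drop = ndvi_before - ndvi_after
    mask = drop > drop_threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s >= min_pixels}
    return np.where(np.isin(labels, list(keep)), labels, 0)


if __name__ == "__main__":
    rng = np.random.default_rng(5)
    before = 0.7 + 0.05 * rng.standard_normal((200, 200))    # healthy late-season crops
    after = before.copy()
    after[80:90, 20:180] -= 0.35                              # synthetic damage streak
    swaths = candidate_damage_swaths(before, after)
    print("candidate swath pixels:", int((swaths > 0).sum()))
```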
Decomposition Technique for Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)
2014-01-01
The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational-conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from them), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of the different sources of uncertainty present in the process. Next, the maps are used in an on-line mode, where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both the current damage state and future damage accumulation. Remaining life is computed as the difference between the time at which the extrapolated damage reaches the failure threshold and the time at which the prediction is made.
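A minimal sketch of the two-map decomposition, under assumed linear regressors and a constant future operating condition: one map converts features to a damage estimate, the other converts operating conditions to a damage rate, and the RUL is the time for the extrapolated damage to reach an assumed failure threshold.

```python
# Hedged sketch of the decomposition: feature-to-damage map, condition-to-
# damage-rate map, and extrapolation to a failure threshold. Linear maps,
# constant future load, and threshold value are illustrative assumptions.
import numpy as np


def fit_linear_map(X, y):
    """Least-squares linear map y ~ X (offline training of either map)."""
    A = np.column_stack([X, np.ones(len(X))])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: np.dot(np.append(np.atleast_1d(x), 1.0), coeffs)


def remaining_useful_life(feature, future_condition, feature_to_damage,
                          condition_to_rate, threshold=1.0, dt=1.0, horizon=10_000):
    """Extrapolate damage under an assumed future condition until threshold."""
    damage = feature_to_damage(feature)                   # current damage estimate
    rate = max(condition_to_rate(future_condition), 1e-9)
    steps = 0
    while damage < threshold and steps < horizon:
        damage += rate * dt                               # future damage accumulation
        steps += 1
    return steps * dt


if __name__ == "__main__":
    rng = np.random.default_rng(6)
    feats = rng.uniform(0, 1, size=(100, 1))
    dmg = 0.8 * feats[:, 0] + 0.02 * rng.standard_normal(100)     # ground truth damage
    conds = rng.uniform(0, 1, size=(100, 1))
    rates = 0.001 + 0.004 * conds[:, 0]                            # ground truth rates
    f2d = fit_linear_map(feats, dmg)
    c2r = fit_linear_map(conds, rates)
    print(f"estimated RUL: {remaining_useful_life(0.5, 0.7, f2d, c2r):.0f} cycles")
```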
NASA Astrophysics Data System (ADS)
Deng, Bin; Shen, ZhiBin; Duan, JingBo; Tang, GuoJin
2014-05-01
This paper studies the damage-viscoelastic behavior of composite solid propellants of solid rocket motors (SRMs). Based on viscoelastic theory and the strain equivalence hypothesis in damage mechanics, a three-dimensional (3-D) nonlinear viscoelastic constitutive model incorporating damage is developed. The resulting viscoelastic constitutive equations are numerically discretized by an integration algorithm, and a stress-updating method is presented in which the nonlinear equations are solved by the Newton-Raphson method. A stress-updating material subroutine is developed and embedded into the commercial code Abaqus, and is validated through typical examples. Our results indicate that the finite element results are in good agreement with the analytical ones and have high accuracy, and that the suggested method and subroutine are efficient and can be further applied to damage-coupled structural analysis of practical SRM grains.
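The flavor of an implicit stress update solved by Newton-Raphson can be shown in one dimension; the nonlinear Maxwell-type model below is a stand-in assumption and not the paper's 3-D viscoelastic-damage law or its Abaqus subroutine.

```python
# Hedged 1-D sketch of a backward-Euler stress update solved with
# Newton-Raphson. The nonlinear-dashpot Maxwell model is an assumption.
def update_stress(stress_n, d_strain, dt, E=10.0, eta=100.0, s0=5.0,
                  tol=1e-10, max_iter=25):
    """Solve R(s) = s - stress_n - E*d_strain + dt*(s/eta)*(1+(s/s0)**2) = 0."""
    s = stress_n + E * d_strain                           # elastic predictor
    for _ in range(max_iter):
        flow = (s / eta) * (1.0 + (s / s0) ** 2)          # nonlinear viscous flow
        residual = s - stress_n - E * d_strain + dt * flow
        jacobian = 1.0 + dt * (1.0 / eta) * (1.0 + 3.0 * (s / s0) ** 2)
        ds = -residual / jacobian                         # Newton-Raphson correction
        s += ds
        if abs(ds) < tol:
            return s
    raise RuntimeError("Newton-Raphson did not converge")


if __name__ == "__main__":
    dt, strain_rate, stress = 0.01, 0.01, 0.0
    for _ in range(200):                                  # ramp loading to 2% strain
        stress = update_stress(stress, strain_rate * dt, dt)
    print(f"stress after 2% strain: {stress:.3f}")
```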
Classification-Based Spatial Error Concealment for Visual Communications
NASA Astrophysics Data System (ADS)
Chen, Meng; Zheng, Yefeng; Wu, Min
2006-12-01
In an error-prone transmission environment, error concealment is an effective technique to reconstruct damaged visual content. Due to large variations in image characteristics, different concealment approaches are necessary to accommodate the differing nature of the lost image content. In this paper, we address this issue and propose using classification to integrate state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform the conventional approaches.
Health-aware Model Predictive Control of Pasteurization Plant
NASA Astrophysics Data System (ADS)
Karimi Pour, Fatemeh; Puig, Vicenç; Ocampo-Martinez, Carlos
2017-01-01
In order to optimize the trade-off between component life and energy consumption, the integration of system health management and control modules is required. This paper proposes the integration of model predictive control (MPC) with a fatigue estimation approach that minimizes the damage to the components of a pasteurization plant. The fatigue is estimated with the rainflow counting algorithm. Using data from this algorithm, a simplified model that characterizes the health of the system is developed and integrated with the MPC. The MPC objective is modified by adding an extra criterion that takes into account the accumulated damage; however, this extra criterion introduces a steady-state offset. By including integral action in the MPC controller, the steady-state regulation error is eliminated. The proposed control scheme is validated in simulation using a simulator of a utility-scale pasteurization plant.
Flies compensate for unilateral wing damage through modular adjustments of wing and body kinematics
Iwasaki, Nicole A.; Elzinga, Michael J.; Melis, Johan M.; Dickinson, Michael H.
2017-01-01
Using high-speed videography, we investigated how fruit flies compensate for unilateral wing damage, in which loss of area on one wing compromises both weight support and roll torque equilibrium. Our results show that flies control for unilateral damage by rolling their body towards the damaged wing and by adjusting the kinematics of both the intact and damaged wings. To compensate for the reduction in vertical lift force due to damage, flies elevate wingbeat frequency. Because this rise in frequency increases the flapping velocity of both wings, it has the undesired consequence of further increasing roll torque. To compensate for this effect, flies increase the stroke amplitude and advance the timing of pronation and supination of the damaged wing, while making the opposite adjustments on the intact wing. The resulting increase in force on the damaged wing and decrease in force on the intact wing function to maintain zero net roll torque. However, the bilaterally asymmetrical pattern of wing motion generates a finite lateral force, which flies balance by maintaining a constant body roll angle. Based on these results and additional experiments using a dynamically scaled robotic fly, we propose a simple bioinspired control algorithm for asymmetric wing damage. PMID:28163885
NASA Astrophysics Data System (ADS)
Dushyanth, N. D.; Suma, M. N.; Latte, Mrityanjaya V.
2016-03-01
Damage in a structure may incur significant maintenance costs and serious safety problems; hence, detecting damage at an early stage is of prime importance. The main contribution of this investigation is a generic optimal methodology to improve the accuracy of locating a flaw in a structure. This novel approach involves a two-step process. The first step aims at extracting damage-sensitive features from the received signal; these extracted features are often termed the damage index or damage indices and serve as indicators of whether damage is present. In particular, a multilevel SVM (support vector machine) plays a vital role in distinguishing faulty from healthy structures. Once a structure is identified as damaged, the position of the damage is determined in the subsequent step using the Hilbert-Huang transform. The proposed algorithm has been evaluated in both simulation and experimental tests on a 6061 aluminum plate with dimensions 300 mm × 300 mm × 5 mm, which yield a considerable improvement in the accuracy of estimating the position of the flaw.
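As a simplified illustration of the positioning step, the sketch below picks the arrival time of a scattered wave packet from a Hilbert-transform envelope and converts it to a path length; the paper itself uses multilevel SVM screening followed by the Hilbert-Huang transform, so this plain envelope picking is only an assumed stand-in.

```python
# Hedged sketch of flaw positioning from the envelope of a baseline-subtracted
# signal; group velocity, out-and-back path, and the synthetic echo are assumptions.
import numpy as np
from scipy.signal import hilbert


def flaw_distance(residual_signal, fs, group_velocity):
    """Estimate flaw distance from the envelope peak of a residual signal."""
    envelope = np.abs(hilbert(residual_signal))           # instantaneous amplitude
    t_arrival = np.argmax(envelope) / fs
    return 0.5 * group_velocity * t_arrival               # out-and-back path assumed


if __name__ == "__main__":
    fs, v_g = 2.0e6, 5200.0                               # sampling rate, assumed group velocity
    t = np.arange(4000) / fs
    echo = np.exp(-((t - 60e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 2.0e5 * t)
    rng = np.random.default_rng(7)
    noisy = echo + 0.05 * rng.standard_normal(t.size)
    print(f"estimated flaw distance: {flaw_distance(noisy, fs, v_g):.3f} m")
```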
SIMPLE GREEN® (Dual listing for 2013 reformulation)
Technical product bulletin: this surface washing agent is suitable for use in oil spill cleanups in freshwater, estuarine, and marine environments at all temperatures, on both porous and non-porous surfaces.
40 CFR 80.71 - Descriptions of VOC-control regions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Jersey, New York, North Dakota, Ohio, Pennsylvania, Rhode Island, South Dakota, Vermont, Washington, West Virginia ... Oklahoma, Oregon, South Carolina, Tennessee, Texas, Utah, Virginia. (b) Reformulated gasoline covered areas which ...
Stochastic damage evolution in textile laminates
NASA Technical Reports Server (NTRS)
Dzenis, Yuris A.; Bogdanovich, Alexander E.; Pastore, Christopher M.
1993-01-01
A probabilistic model utilizing random material characteristics to predict damage evolution in textile laminates is presented. The model is based on a division of each ply into two sublaminas consisting of cells. The probability of cell failure is calculated using stochastic function theory and a maximum strain failure criterion. Three failure modes are taken into account: fiber breakage, matrix failure in the transverse direction, and matrix or interface shear cracking. Computed failure probabilities are utilized in reducing cell stiffness based on the mesovolume concept. A numerical algorithm is developed to predict the damage evolution and deformation history of textile laminates. The effect of scatter in fiber orientation on cell properties is discussed, and the influence of the weave on damage accumulation is illustrated with an example of a Kevlar/epoxy laminate.
Knowledge of damage identification about tensegrities via flexibility disassembly
NASA Astrophysics Data System (ADS)
Jiang, Ge; Feng, Xiaodong; Du, Shigui
2017-12-01
Tensegrity structures, composed of continuous cables and discrete struts, carry tension and compression, respectively. In order to determine the damage extent of tensegrity structures, a new damage identification method based on flexibility disassembly is presented. The tensegrity structural flexibility matrix is decomposed into a matrix representing the connectivity between degrees of freedom and a diagonal matrix containing the magnitude information. The procedure consists of three steps: (1) calculate the perturbation flexibility; (2) compute the flexibility connectivity matrix and the perturbation flexibility parameters; (3) calculate the perturbation stiffness parameters. The efficiency of the proposed method is demonstrated on a numerical example comprising 12 pretensioned cables and 4 struts. Accurate identification of local damage depends on the availability of good measured data and an accurate, reasonable algorithm.
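A hedged sketch of a first-order, flexibility-based identification in the same spirit: the stiffness matrix is assembled from element connectivity vectors and stiffness magnitudes, the flexibility change is linearized as -F dK F, and per-element stiffness changes are recovered by least squares. The two-element toy geometry and the linearization are assumptions, not the paper's exact three-step procedure.

```python
# Hedged sketch: K = sum_j k_j b_j b_j^T, linearized flexibility sensitivity
# -F (b_j b_j^T) F per element, and a least-squares fit of the measured
# flexibility change to estimate per-element stiffness (damage) changes.
import numpy as np


def assemble_stiffness(b_vectors, stiffnesses):
    return sum(k * np.outer(b, b) for k, b in zip(stiffnesses, b_vectors))


def identify_damage(b_vectors, k_baseline, F_damaged):
    """Least-squares estimate of relative element stiffness changes."""
    F0 = np.linalg.inv(assemble_stiffness(b_vectors, k_baseline))
    dF = (F_damaged - F0).ravel()
    # Sensitivity of the flexibility to each element stiffness: -F0 b b^T F0.
    S = np.column_stack([(-F0 @ np.outer(b, b) @ F0).ravel() for b in b_vectors])
    dk, *_ = np.linalg.lstsq(S, dF, rcond=None)
    return dk / np.array(k_baseline)


if __name__ == "__main__":
    # Two-element toy structure with two free DOFs (stand-in for cables/struts).
    b_vectors = [np.array([1.0, 0.0]), np.array([0.6, 0.8])]
    k_baseline = [100.0, 80.0]
    k_damaged = [100.0, 80.0 * 0.9]                       # 10% damage in element 2
    F_damaged = np.linalg.inv(assemble_stiffness(b_vectors, k_damaged))
    # Roughly [0, -0.11]: element 2 flagged; linearization slightly overstates the loss.
    print(identify_damage(b_vectors, k_baseline, F_damaged))
```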