Bell-Curve Based Evolutionary Optimization Algorithm
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.
1998-01-01
The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
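A minimal sketch of the offspring construction described above, assuming normally distributed deviations; the weight w and the two standard deviations are illustrative parameter names, not those of the paper:

```python
import numpy as np

def bcb_child(p1, p2, w=0.5, sigma_par=0.1, sigma_perp=0.1, rng=None):
    """Generate one BCB child: a weighted point on the parent-connecting line,
    perturbed parallel and orthogonal to that line with normal (bell-curve) noise."""
    rng = np.random.default_rng() if rng is None else rng
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    m = w * p1 + (1.0 - w) * p2            # weighted point on the connecting line
    d = p2 - p1
    d /= np.linalg.norm(d)                 # unit vector along the line
    r = rng.standard_normal(p1.size)
    r -= r.dot(d) * d                      # random direction orthogonal to the line
    r /= np.linalg.norm(r)
    return (m + rng.normal(0.0, sigma_par) * d     # parallel deviation
              + rng.normal(0.0, sigma_perp) * r)   # orthogonal deviation

child = bcb_child(np.zeros(5), np.ones(5))   # a child of two 5-dimensional parents
```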
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2001-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in specific, see Back, and Dasgupta and Michalewicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
Bell-Curve Based Evolutionary Strategies for Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
2000-01-01
Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in specific, see Back (1996) and Dasgupta and Michalewicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
A Bell-Curve Based Algorithm for Mixed Continuous and Discrete Structural Optimization
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw
2001-01-01
An evolutionary based strategy utilizing two normal distributions to generate children is developed to solve mixed integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (mu + mu) evolutionary strategies and evolutionary programs but with fewer parameters to adjust and no mechanism for self adaptation. First, a new version of BCB to solve purely discrete optimization problems is described and its performance tested against a tabu search code for an actuator placement problem. Next, the performance of a combined version of discrete and continuous BCB is tested on 2-dimensional shape problems and on a minimum weight hub design problem. In the latter case the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).
How Skewed Is "The Bell Curve"?
ERIC Educational Resources Information Center
Haynes, Norris
1995-01-01
Raises issues for consideration in responding to the genetically based differences in intelligence suggested by "The Bell Curve." The author articulates several theories of intelligence supporting the environmental (nurturing) paradigm and argues why labeling and categorizing according to IQ scores is professionally unethical and…
Classifying 50 years of Bell inequalities
NASA Astrophysics Data System (ADS)
Rosset, Denis; Bancal, Jean-Daniel; Gisin, Nicolas
2014-10-01
Since John S. Bell demonstrated the interest of studying linear combinations of probabilities in relation to the EPR paradox in 1964, Bell inequalities have led to numerous developments. Unfortunately, the description of Bell inequalities is subject to several degeneracies, which make any exchange of information about them unnecessarily hard. Here, we analyze these degeneracies and propose a decomposition for Bell-like inequalities based on a set of reference expressions which is not affected by them. These reference expressions set a common ground for comparing Bell inequalities. We provide algorithms based on finite group theory to compute this decomposition. Implementing these algorithms allows us to set up a compendium of reference Bell-like inequalities, available online at www.faacets.com. This website constitutes a platform where registered Bell-like inequalities can be explored, new inequalities can be compared to previously known ones, and relevant information on Bell inequalities can be added in a collaborative manner. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to '50 years of Bell's theorem'.
The Bell Curve Wars. Race, Intelligence, and the Future of America.
ERIC Educational Resources Information Center
Fraser, Steven, Ed.
"The Bell Curve" by Richard J. Herrnstein and Charles Murray has generated enormous debate as a result of its claim that there is a connection between race and intelligence. The essays of this collection respond to "The Bell Curve" in various ways. Taken together, the following offer an antidote to a work of dubious premises…
Ringin' the water bell: dynamic modes of curved fluid sheets
NASA Astrophysics Data System (ADS)
Kolinski, John; Aharoni, Hillel; Fineberg, Jay; Sharon, Eran
2015-11-01
A water bell is formed by fluid flowing in a thin, coherent sheet in the shape of a bell. Experimentally, a water bell is created via the impact of a cylindrical jet on a flat surface. Its shape is set by the splash angle (the separation angle) of the resulting cylindrically symmetric water sheet. The separation angle is altered by adjusting the height of a lip surrounding the impact point, as in a water sprinkler. We drive the lip's height sinusoidally, altering the separation angle, and ringin' the water bell. This forcing generates disturbances on the steady-state water bell that propagate forward and backward in the fluid's reference frame at well-defined velocities, and interact, resulting in the emergence of an interference pattern unique to each steady-state geometry. We analytically model these dynamics by linearizing the amplitude of the bell's response about the underlying curved geometry. This simple model predicts the nodal structure over a wide range of steady-state water bell configurations and driving frequencies. Due to the curved water bell geometry, the nodal structure is quite complex; nevertheless, the predicted nodal structure agrees extremely well with the experimental data. When we drive the bell beyond perturbative separation angles, the nodal locations surprisingly persist, despite the strikingly altered underlying water bell shape. At extreme driving amplitudes the water sheet assumes a rich variety of tortuous, non-convex shapes; nevertheless, the fluid sheet remains intact.
It's time to move on from the bell curve.
Robinson, Lawrence R
2017-11-01
The bell curve was first described in the 18th century by de Moivre and Gauss to depict the distribution of binomial events, such as coin tossing, or repeated measures of physical objects. In the 19th and 20th centuries, the bell curve was appropriated, or perhaps misappropriated, to apply to biologic and social measures across people. For many years we used it to derive reference values for our electrophysiologic studies. There is, however, no reason to believe that electrophysiologic measures should approximate a bell-curve distribution, and empiric evidence suggests they do not. The concept of using mean ± 2 standard deviations should be abandoned. Reference values are best derived by using non-parametric analyses, such as percentile values. This proposal aligns with the recommendation of the recent normative data task force of the American Association of Neuromuscular & Electrodiagnostic Medicine and follows sound statistical principles. Muscle Nerve 56: 859-860, 2017. © 2017 Wiley Periodicals, Inc.
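A small numeric illustration of the contrast drawn above, on a synthetic right-skewed sample (the distribution here is invented for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic, right-skewed 'latency-like' measurements (not a bell curve)
values = rng.lognormal(mean=1.0, sigma=0.4, size=1000)

# Parametric reference limits assume normality
lo_p, hi_p = values.mean() - 2 * values.std(), values.mean() + 2 * values.std()

# Nonparametric limits: 2.5th and 97.5th percentiles
lo_np, hi_np = np.percentile(values, [2.5, 97.5])

outside_p = np.mean((values < lo_p) | (values > hi_p))
print(f"mean±2SD:    [{lo_p:.2f}, {hi_p:.2f}]  flags {outside_p:.1%} of the sample")
print(f"percentiles: [{lo_np:.2f}, {hi_np:.2f}]  flags 5.0% by construction")
```

On skewed data the mean ± 2 SD interval can extend below zero and flag an asymmetric, unintended fraction of observations, while the percentile limits bracket exactly the central 95% regardless of shape.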
On the dynamics of StemBells: Microbubble-conjugated stem cells for ultrasound-controlled delivery
NASA Astrophysics Data System (ADS)
Kokhuis, Tom J. A.; Naaijkens, Benno A.; Juffermans, Lynda J. M.; Kamp, Otto; van der Steen, Antonius F. W.; Versluis, Michel; de Jong, Nico
2017-07-01
The use of stem cells for regenerative tissue repair is promising but hampered by the low number of cells delivered to the site of injury. To increase the delivery, we propose a technique in which stem cells are linked to functionalized microbubbles, creating echogenic complexes dubbed StemBells. StemBells are highly susceptible to acoustic radiation force, which can be employed after injection to push the StemBells locally to the treatment site. To optimally benefit from the delivery technique, a thorough characterization of the dynamics of StemBells during ultrasound exposure is needed. Using high-speed optical imaging, we study the dynamics of StemBells as a function of the applied frequency, from which resonance curves were constructed. A theoretical model, based on a modified Rayleigh-Plesset type equation, captured the experimental resonance characteristics and radial dynamics in detail.
"The Bell Curve": Getting the Facts Straight.
ERIC Educational Resources Information Center
Feuerstein, Reuven; Kozulin, Alex
1995-01-01
Despite its failings, Herrnstein and Murray's "The Bell Curve" is valuable for emphasizing cognition as significantly affecting human performance and social achievement; acknowledging human differences; and offering a frightening depiction of contemporary American society. The authors err in reducing intelligence to a stable, immutable…
"The Bell Curve" and Carrie Buck: Eugenics Revisited.
ERIC Educational Resources Information Center
Smith, J. David
1995-01-01
The 1994 publication of "The Bell Curve" by R. Herrnstein and C. Murray is compared to other examples of eugenic principles, including the sterilization of "feebleminded" Carrie Buck, family degeneracy studies focusing on lower class Caucasian families, and other works that view the poorest and least educated members of society…
For Whom the Bell Curves: IQ Scores in Historical Perspective.
ERIC Educational Resources Information Center
Patterson, Orlando
1995-01-01
That there is a significant degree of observable average difference in the intelligence quotients of blacks and whites is an established fact. The explanations for this offered by Herrnstein and Murray ("The Bell Curve," 1994) ignore the equally well-established facts of discrimination and disadvantage over centuries. (SLD)
The "Bell Curve": For Whom It Tolls.
ERIC Educational Resources Information Center
Molnar, Alex
1995-01-01
According to Herrnstein and Murray's "The Bell Curve" (1994), public education cannot alter the economic, social, or political stratification of American society. Intelligence is supposedly being combined and concentrated, and there is no inexpensive, reliable method to raise IQ. Actually, the book justifies the economic status quo and a…
"The Bell Curve": Review of Reviews.
ERIC Educational Resources Information Center
Parker, Franklin; Parker, Betty J.
This paper reviews the book "The Bell Curve" by Harvard psychologist Richard J. Herrnstein and political scientist Charles Alan Murray. The paper asserts as the book's main points and implications: (1) one's socioeconomic place in life is now determined by IQ rather than family wealth and influence; (2) ruling white elites, who have…
Status of the calibration and alignment framework at the Belle II experiment
NASA Astrophysics Data System (ADS)
Dossett, D.; Sevior, M.; Ritter, M.; Kuhr, T.; Bilka, T.; Yaschenko, S.
2017-10-01
The Belle II detector at the SuperKEKB e+e− collider plans to take first collision data in 2018. The monetary and CPU time costs associated with storing and processing the data mean that it is crucial for the detector components at Belle II to be calibrated quickly and accurately. A fast and accurate calibration system would allow the high level trigger to increase the efficiency of event selection, and can give users analysis-quality reconstruction promptly. A flexible framework to automate the fast production of calibration constants is being developed in the Belle II Analysis Software Framework (basf2). Detector experts only need to create two components from C++ base classes in order to use the automation system. The first collects data from Belle II event data files and outputs much smaller files to pass to the second component. This runs the main calibration algorithm to produce calibration constants ready for upload into the conditions database. A Python framework coordinates the input files, the order of processing, and the submission of jobs. Splitting the operation into collection and algorithm processing stages allows the framework to optionally parallelize the collection stage on a batch system.
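A schematic sketch of the collector/algorithm split described above; the class names, methods, and event format here are invented for illustration and are not the real basf2 API:

```python
# Hypothetical two-stage calibration sketch (not basf2's actual classes).

class HitCollector:
    """Stage 1: scan event data and accumulate a small summary object."""
    def __init__(self):
        self.histogram = {}                       # e.g. residuals per detector channel

    def collect(self, event):
        channel, residual = event                 # toy event format (assumption)
        self.histogram.setdefault(channel, []).append(residual)

class OffsetCalibration:
    """Stage 2: turn the collected summaries into calibration constants."""
    def run(self, histogram):
        # One constant per channel: the mean residual (stand-in for a real algorithm)
        return {ch: sum(v) / len(v) for ch, v in histogram.items()}

# A coordinator (in basf2 this role is played by a Python framework) wires the stages.
collector = HitCollector()
for event in [("ch0", 0.1), ("ch0", 0.3), ("ch1", -0.2)]:    # toy input data
    collector.collect(event)
constants = OffsetCalibration().run(collector.histogram)      # ready for the conditions DB
print(constants)
```

The design point is that stage 1 is embarrassingly parallel over input files, so only small summary files need to reach the (serial) algorithm stage.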
NASA Astrophysics Data System (ADS)
Kang, Yi-Hao; Chen, Ye-Hong; Shi, Zhi-Cheng; Huang, Bi-Hua; Song, Jie; Xia, Yan
2017-08-01
We propose a protocol for complete Bell-state analysis for two superconducting-quantum-interference-device qubits. The Bell-state analysis could be completed by using a sequence of microwave pulses designed by the transitionless tracking algorithm, which is a useful method in the technique of shortcuts to adiabaticity. After the whole process, the information for distinguishing the four Bell states will be encoded on two auxiliary qubits, while the Bell states remain unchanged. One can read out the information by detecting the auxiliary qubits. Thus the Bell-state analysis is nondestructive. The numerical simulations show that the protocol possesses a high success probability of distinguishing each Bell state with current experimental technology, even when decoherence is taken into account. Thus, the protocol may have potential applications for information readout in quantum communications and quantum computations in superconducting quantum networks.
Inequality by Design: Cracking the Bell Curve Myth [book review].
ERIC Educational Resources Information Center
Carroll, John B.
2002-01-01
This book, a critique of "The Bell Curve" by R. Herrnstein and C. Murray, explores what "inequality" in society means, how it arises, and how it can be measured or dealt with quantitatively. It also considers how societal and other variables work to increase or decrease inequality. The book argues that "The Bell…
For Whom the Bell Curves: Old Texts, Mental Retardation, and the Persistent Argument.
ERIC Educational Resources Information Center
Smith, J. David
1995-01-01
A review of secondary education and college biology textbooks published from 1900 through 1950 finds strong support for eugenics and Social Darwinism. These attitudes are related to effects of such recent books as "The Bell Curve" (by R. Herrnstein and C. Murray) for people with mental retardation. (DB)
Policy Alternatives for Post-Industrial America Suggested in the "Bell Curve": The Untold Story.
ERIC Educational Resources Information Center
Bauer, Norman J.
The primary problem that Richard J. Herrnstein and Charles Murray address in their book, "The Bell Curve," is that an unrecognized societal migration has been emerging in American society since 1950. People with high IQs are rewarded socially and economically, while the rest of the population has remained stagnant. This paper describes…
"The Bell Curve": Another Chapter in the Continuing Political Economy of Racism.
ERIC Educational Resources Information Center
Newby, Robert G.; Newby, Diane E.
1995-01-01
Criticizes Charles Murray's "The Bell Curve" and attempts a more cogent analysis of the respective roles of blacks and the U.S. political economy. Utilizes a sociology of knowledge framework to discuss the evolving nature of blacks in the nation's workforce. Briefly discusses eugenics and the history of racist social theories. (MJP)
ERIC Educational Resources Information Center
Belke, Terry W.
1995-01-01
Neutral summary of "The Bell Curve" (Herrnstein and Murray) by a former student of Herrnstein. Focuses on the emergence of a cognitive elite in the United States; relationships between IQ and poverty, educational attainment, unemployment, divorce, illegitimacy, welfare dependency, parenting competence, criminal behaviors, and voting;…
Quantum Public Key Cryptosystem Based on Bell States
NASA Astrophysics Data System (ADS)
Wu, WanQing; Cai, QingYu; Zhang, HuanGuo; Liang, XiaoYan
2017-11-01
Classical public key cryptosystems (PKC), such as RSA, ElGamal, and ECC, are no longer secure against quantum algorithms, and quantum cryptography has become a novel research topic. In this paper we present a quantum asymmetrical cryptosystem, i.e. a quantum public key cryptosystem (QPKC), based on Bell states. In particular, in the proposed QPKC the public key is given by the first n particles of Bell states and generalized Pauli operations. The corresponding secret key is the last n particles of Bell states and the inverse of the generalized Pauli operations. The proposed QPKC encrypts the message using a public key and decrypts the ciphertext using a private key. By Holevo's theorem, we proved the security of the secret key and messages during the QPKC.
Eugenics Past and Present: Remembering Buck v. Bell.
ERIC Educational Resources Information Center
Berson, Michael J.; Cruz, Barbara
2001-01-01
Provides background information about the eugenics movement. Focuses on eugenics in the United States detailing the case, Buck v. Bell, and eugenics in Germany. Explores the present eugenic movement, focusing on "The Bell Curve," China's one child policy, and the use of eugenic sterilizations in the United States and Canada. Includes…
"The Bell Curve": Does IQ and Race Determine Class and Place in America?
ERIC Educational Resources Information Center
Parker, Franklin
"The Bell Curve" by Richard J. Herrnstein and Charles A. Murray has ignited a fierce academic debate. They assert that IQ as measured by tests has replaced family wealth and status in determining jobs, income, class, and place in American life; that whites average 15 IQ points higher than blacks; and that high-IQ ruling elites, with…
Everything You Thought Was True about IQ Testing, but Isn't: A Reaction to "The Bell Curve."
ERIC Educational Resources Information Center
Dent, Harold E.
Rather than focus on the numerous flaws in the book "The Bell Curve" (Herrnstein & Murray), this discussion focuses on the racism and bigoted beliefs of the pioneers in the mental measurement movement in the United States--beliefs which provided the background and opportunity for the publication of the book. A significant amount of…
The Bell Curve and the Future of Literacy: If This Is the Answer, What Is the Question?
ERIC Educational Resources Information Center
Wahlstrom, Ralph
This paper takes the position that the main premise of "The Bell Curve" (by Richard J. Herrnstein and Charles Murray) is that significant advantages exist for possessing an IQ toward the top of the range--advantages that pertain to school success, career, and income. The premise is that people with high IQs have naturally acquired scholarly…
Detection of Naja atra Cardiotoxin Using Adenosine-Based Molecular Beacon.
Shi, Yi-Jun; Chen, Ying-Jung; Hu, Wan-Ping; Chang, Long-Sen
2017-01-07
This study presents an adenosine (A)-based molecular beacon (MB) for selective detection of Naja atra cardiotoxin (CTX) that functions by utilizing the competitive binding between CTX and the poly(A) stem of MB to coralyne. The 5'- and 3'-end of MB were labeled with a reporter fluorophore and a non-fluorescent quencher, respectively. Coralyne induced formation of the stem-loop MB structure through A₂-coralyne-A₂ coordination, causing fluorescence signal turn-off due to fluorescence resonance energy transfer between the fluorophore and quencher. CTX3 could bind to coralyne. Moreover, CTX3 alone induced the folding of MB structure and quenching of MB fluorescence. Unlike that of snake venom α-neurotoxins, the fluorescence signal of coralyne-MB complexes produced a bell-shaped concentration-dependent curve in the presence of CTX3 and CTX isotoxins; a turn-on fluorescence signal was noted when CTX concentration was ≤80 nM, while a turn-off fluorescence signal was noted with a further increase in toxin concentrations. The fluorescence signal of coralyne-MB complexes yielded a bell-shaped curve in response to varying concentrations of N. atra crude venom but not those of Bungarus multicinctus and Protobothrops mucrosquamatus venoms. Moreover, N. nigricollis venom also functioned as N. atra venom to yield a bell-shaped concentration-dependent curve of MB fluorescence signal, again supporting that the hairpin-shaped MB could detect crude venoms containing CTXs. Taken together, our data validate that a platform composed of coralyne-induced stem-loop MB structure selectively detects CTXs.
2014-01-01
Background: Various computer-based methods exist for the detection and quantification of protein spots in two-dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure for spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results: In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two-dimensional Gaussian function curves for the extraction of data from two-dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion: Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
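A sketch of the core fitting step, assuming SciPy; this stands in for, and is not, the authors' MATLAB implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, offset):
    """Two-dimensional Gaussian 'bell' surface, flattened for curve_fit."""
    x, y = xy
    g = amp * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2))) + offset
    return g.ravel()

# Synthetic 2-DE spot on a 32x32 pixel patch
x, y = np.meshgrid(np.arange(32), np.arange(32))
truth = (500.0, 15.0, 17.0, 3.0, 4.0, 20.0)
img = gauss2d((x, y), *truth).reshape(32, 32)
img += np.random.default_rng(0).normal(0, 5, img.shape)      # additive noise

p0 = (img.max(), 16, 16, 2, 2, img.min())                    # rough starting guess
popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
amp, x0, y0, sx, sy, offset = popt
volume = 2 * np.pi * amp * sx * sy    # spot 'volume' from the fitted parameters
print(volume)
```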
NASA Astrophysics Data System (ADS)
Braun, N.; Hauth, T.; Pulvermacher, C.; Ritter, M.
2017-10-01
Today’s analyses for high-energy physics (HEP) experiments involve processing a large amount of data with highly specialized algorithms. The contemporary workflow from recorded data to final results is based on the execution of small scripts - often written in Python or ROOT macros which call complex compiled algorithms in the background - to perform fitting procedures and generate plots. During recent years, interactive programming environments such as Jupyter became popular. Jupyter allows one to develop Python-based applications, so-called notebooks, which bundle code, documentation and results, e.g. plots. Advantages over classical script-based approaches are the ability to recompute only parts of the analysis code, which allows for fast and iterative development, and a web-based user frontend, which can be hosted centrally and only requires a browser on the user side. In our novel approach, Python and Jupyter are tightly integrated into the Belle II Analysis Software Framework (basf2), currently being developed for the Belle II experiment in Japan. This makes it possible to develop code in Jupyter notebooks for every aspect of the event simulation, reconstruction and analysis chain. These interactive notebooks can be hosted as a centralized web service via jupyterhub with docker and used by all scientists of the Belle II Collaboration. Because of its generality and encapsulation, the setup can easily be scaled to large installations.
Bell-Curve Genetic Algorithm for Mixed Continuous and Discrete Optimization Problems
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Griffith, Michelle; Sykes, Ruth; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
In this manuscript we have examined an extension of BCB that encompasses a mix of continuous and quasi-discrete, as well as truly discrete, applications. We began by testing two refinements to the discrete version of BCB. The testing of midpoint versus fitness (Tables 1 and 2) proved inconclusive. The testing of discrete normal tails versus standard mutation was conclusive and demonstrated that the discrete normal tails are better. Next, we implemented these refinements in a combined continuous and discrete BCB and compared the performance of two discrete distance measures on the hub problem. Here we found that when "order does matter" it pays to take it into account.
Byun, Hayoung; Cho, Yang-Sun; Jang, Jeon Yeob; Chung, Kyu Whan; Hwang, Soojin; Chung, Won-Ho; Hong, Sung Hwa
2013-10-01
To evaluate the prognostic and predictive value of electroneuronography (ENoG) in acute severe inflammatory facial paralysis, including Bell's palsy and Ramsay Hunt syndrome (RHS). Prospective observational study. Patients with acute severe facial paralysis of House-Brackmann (H-B) grade IV or worse and diagnosed with Bell's palsy or RHS were enrolled from August 2007 to July 2011. After treatment with oral corticosteroid, antiviral agent, and protective eye care, patients were followed up until recovery or 12 months from onset. Sixty-six patients with Bell's palsy and 22 with RHS were included. Multiple logistic regression analysis showed a significant effect of ENoG value on recovery in both Bell's palsy and RHS. Values of ENoG were significantly worse in RHS than in Bell's palsy. The chance of early recovery within 6 weeks, after correction for the ENoG effect, was still significantly worse in RHS. Logistic regression analysis showed a 90% chance of recovery within 6 months, expected with ENoG values of 69.2% degeneration (Bell's palsy) and 59.3% (RHS). Receiver operating characteristic (ROC) curves showed ENoG values of 82.5% (Bell's palsy) and 78.0% (RHS) as critical cutoff values for non-recovery until 1 year, with the best sensitivity and specificity. A higher chance of recovery was expected with better ENoG in Bell's palsy and RHS. Based on our data, non-recovery is predicted in patients with an ENoG value greater than 82.5% in Bell's palsy, and 78% in RHS. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
Explaining quantum correlations through evolution of causal models
NASA Astrophysics Data System (ADS)
Harper, Robin; Chapman, Robert J.; Ferrie, Christopher; Granade, Christopher; Kueng, Richard; Naoumenko, Daniel; Flammia, Steven T.; Peruzzo, Alberto
2017-04-01
We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multiobjective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
Self-guided method to search maximal Bell violations for unknown quantum states
NASA Astrophysics Data System (ADS)
Yang, Li-Kai; Chen, Geng; Zhang, Wen-Hao; Peng, Xing-Xiang; Yu, Shang; Ye, Xiang-Jun; Li, Chuan-Feng; Guo, Guang-Can
2017-11-01
In recent decades, a great variety of research and applications concerning Bell nonlocality have been developed with the advent of quantum information science. Provided that Bell nonlocality can be revealed by the violation of a family of Bell inequalities, finding the maximal Bell violation (MBV) for unknown quantum states becomes an important and inevitable task during Bell experiments. In this paper we introduce a self-guided method to find MBVs for unknown states using a stochastic gradient ascent algorithm (SGA), by parametrizing the corresponding Bell operators. For the three investigated systems (two qubit, three qubit, and two qutrit), this method can ascertain the MBV of general two-setting inequalities within 100 iterations. Furthermore, we prove SGA is also feasible when facing more complex Bell scenarios, e.g., the d-setting, d-outcome Bell inequality. Moreover, compared to other possible methods, SGA exhibits significant superiority in efficiency, robustness, and versatility.
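A minimal sketch of the idea for the simplest (two-qubit CHSH) case, assuming the singlet state so the correlator reduces to E(a, b) = -cos(a - b); the finite-difference gradient and step size are illustrative choices, not the authors' exact scheme:

```python
import numpy as np

def chsh(angles):
    """CHSH value for the singlet state with measurement angles (a, a', b, b')."""
    a, a2, b, b2 = angles
    E = lambda x, y: -np.cos(x - y)          # singlet-state correlator
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

def sga_max_violation(steps=200, lr=0.1, eps=1e-4, seed=1):
    """Gradient ascent over the measurement settings toward larger violation."""
    rng = np.random.default_rng(seed)
    th = rng.uniform(0, 2 * np.pi, 4)        # random initial settings
    for _ in range(steps):
        grad = np.zeros(4)
        for i in range(4):                   # finite-difference gradient estimate
            d = np.zeros(4); d[i] = eps
            grad[i] = (chsh(th + d) - chsh(th - d)) / (2 * eps)
        th += lr * grad
    return chsh(th)

print(sga_max_violation())   # approaches the Tsirelson bound 2*sqrt(2) ≈ 2.828
```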
Exercise, oxidants, and antioxidants change the shape of the bell-shaped hormesis curve.
Radak, Zsolt; Ishihara, Kazunari; Tekus, Eva; Varga, Csaba; Posa, Aniko; Balogh, Laszlo; Boldogh, Istvan; Koltai, Erika
2017-08-01
It is debated whether exercise-induced ROS production is obligatory to cause an adaptive response. It is also claimed that antioxidant treatment could eliminate the adaptive response, which appears to be systemic and reportedly reduces the incidence of a wide range of diseases. Here we suggest that if the antioxidant treatment occurs before the physiological function-ROS dose-response curve reaches its peak, the antioxidants can attenuate function. On the other hand, if the antioxidant treatment takes place after the summit of the bell-shaped dose-response curve, antioxidant treatment would have beneficial effects on function. We suggest that the effects of antioxidant treatment are dependent on the intensity of exercise, since the adaptive response, which is multi-pathway dependent, is strongly influenced by exercise intensity. It is further suggested that the ROS level associated with peak physiological function can be extended by physical fitness level, and this could be the basis for exercise pre-conditioning. Physical inactivity, aging or pathological disorders increase the sensitivity to oxidative stress by altering the bell-shaped dose-response curve. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber-based dual-pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented on a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
Prognostic factors of Bell's palsy: prospective patient collected observational study.
Fujiwara, Takashi; Hato, Naohito; Gyo, Kiyofumi; Yanagihara, Naoaki
2014-07-01
The purpose of this study was to evaluate various parameters potentially influencing poor prognosis in Bell's palsy and to assess their predictive value for Bell's palsy. A single-center prospective patient-collected observation and validation study was conducted. To evaluate the correlation between patient characteristics and poor prognosis, we performed univariate and multivariate analyses of age, gender, side of palsy, diabetes mellitus, hypertension, and facial grading score 1 week after onset. To evaluate the accuracy of the facial grading score, we prepared a receiver operating characteristic (ROC) curve and calculated the area under the ROC curve (AUROC). We also calculated sensitivity, specificity, positive/negative likelihood ratio, and positive/negative predictive value. We included Bell's palsy patients who attended Ehime University Hospital within 1 week after onset between 1977 and 2011. We excluded patients who were less than 15 years old and those lost to follow-up within 6 months. The main outcome was defined as non-recovery at 6 months after onset. In total, 679 adults with Bell's palsy were included. The facial grading score at 1 week showed a correlation with non-recovery in the multivariate analysis, although age, gender, side of palsy, diabetes mellitus, and hypertension did not. The AUROC of the facial grading score was 0.793. The Y-system score at 1 week predicted non-recovery at 6 months in Bell's palsy with moderate accuracy.
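As an aside, the AUROC and cutoff analysis described above can be reproduced on synthetic data with scikit-learn; the numbers below are made up and purely illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Synthetic 1-week grading scores: higher scores for non-recovered patients
score_recovered = rng.normal(50, 15, 500)      # outcome = 0 (recovered)
score_nonrecov  = rng.normal(70, 15, 80)       # outcome = 1 (non-recovery)
y = np.concatenate([np.zeros(500), np.ones(80)])
s = np.concatenate([score_recovered, score_nonrecov])

auroc = roc_auc_score(y, s)                    # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y, s)
j = np.argmax(tpr - fpr)                       # Youden's J picks one cutoff
print(f"AUROC={auroc:.3f}, cutoff={thresholds[j]:.1f}, "
      f"sens={tpr[j]:.2f}, spec={1 - fpr[j]:.2f}")
```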
1991-05-01
Greenberg and B. Lubachevsky (AT&T Bell Laboratories). We have developed algorithms suitable for simulating a general class of stack replacement policy al...systems of conservation laws. Finally, we began to study various edge detectors based on the (truncated) Hilbert transform, in the context of spectral
Editorial: The Advent of a Molecular Genetics of General Intelligence.
ERIC Educational Resources Information Center
Weiss, Volkmar
1995-01-01
Raw IQ scores do not demonstrate the bell curve created by normalized scores, and even a bell-shaped distribution does not require large numbers of underlying genes. Family data support a major gene locus for IQ. The correlation between glutathione peroxidase and IQ should be investigated through molecular genetics. (SLD)
On the no-signaling approach to quantum nonlocality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Méndez, J. M., E-mail: manolo@ifisica.uaslp.mx; Urías, Jesús, E-mail: jurias@ifisica.uaslp.mx
2015-03-15
The no-signaling approach to nonlocality deals with separable and inseparable multiparty correlations in the same set of probability states without conflicting causality. The set of half-spaces describing the polytope of no-signaling probability states that are admitted by the most general class of Bell scenarios is formulated in full detail. An algorithm for determining the skeleton that solves the no-signaling description is developed upon a new strategy that is partially pivoting and partially incremental. The algorithm is formulated rigorously and its implementation is shown to be effective to deal with the highly degenerate no-signaling descriptions. Several applications of the algorithm as a tool for the study of quantum nonlocality are mentioned. Applied to a large set of bipartite Bell scenarios, we found that the corresponding no-signaling polytopes have a strikingly high degeneracy that grows exponentially with the size of the Bell scenario.
Chang, Shang-Jen; Yang, Stephen S D
2008-12-01
To evaluate the inter-observer and intra-observer agreement on the interpretation of uroflowmetry curves of children. Healthy kindergarten children were enrolled for evaluation of uroflowmetry. Uroflowmetry curves were classified as bell-shaped, tower, plateau, staccato and interrupted. Only the bell-shaped curves were regarded as normal. Two urodynamists evaluated the curves independently after reviewing the definitions of the different types of uroflowmetry curve. The senior urodynamist evaluated the curves twice 3 months apart. The final conclusion was made when consensus was reached. Agreement among observers was analyzed using kappa statistics. Of 190 uroflowmetry curves eligible for analysis, the intra-observer agreement in interpreting each type of curve and interpreting normalcy vs abnormality was good (kappa=0.71 and 0.68, respectively). Very good inter-observer agreement (kappa=0.81) on normalcy and good inter-observer agreement (kappa=0.73) on types of uroflowmetry were observed. Poor inter-observer agreement existed on the classification of specific types of abnormal uroflowmetry curves (kappa=0.07). Uroflowmetry is a good screening tool for normalcy of kindergarten children, while not a good tool to define the specific types of abnormal uroflowmetry.
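For readers unfamiliar with the statistic, agreement of this kind can be computed with Cohen's kappa; a toy example with invented ratings, using scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

# Toy ratings of 10 uroflowmetry curves by two observers
obs1 = ["bell", "bell", "tower", "staccato", "bell",
        "plateau", "bell", "bell", "interrupted", "tower"]
obs2 = ["bell", "bell", "tower", "plateau", "bell",
        "plateau", "bell", "tower", "interrupted", "tower"]
print(cohen_kappa_score(obs1, obs2))   # 1.0 = perfect agreement, 0 = chance level

# Agreement on normalcy alone (bell-shaped vs any abnormal type)
norm1 = [c == "bell" for c in obs1]
norm2 = [c == "bell" for c in obs2]
print(cohen_kappa_score(norm1, norm2))
```

Collapsing the five curve types to a binary normal/abnormal judgment typically raises kappa, which mirrors the paper's finding of good agreement on normalcy but poor agreement on the specific abnormal type.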
Dumb-bell-shaped equilibrium figures for fiducial contact-binary asteroids and EKBOs
NASA Astrophysics Data System (ADS)
Descamps, Pascal
2015-01-01
In this work, we investigate the equilibrium figures of a dumb-bell-shaped sequence with which we are still not well acquainted. Studies have shown that these elongated and nonconvex figures may realistically replace the classic “Roche binary approximation” for modeling putative peanut-shaped or contact binary asteroids. The best-fit dumb-bell shapes, combined with the known rotational period of the objects, provide estimates of the bulk density of these objects. This new class of mathematical figures has been successfully tested on the observed light curves of three noteworthy small bodies: main-belt Asteroid 216 Kleopatra, Trojan Asteroid 624 Hektor and Edgeworth-Kuiper-belt object 2001 QG298. Using the direct observations of Kleopatra and Hektor obtained with high spatial resolution techniques and fitting the size of the dumb-bell-shaped solutions, we derived new physical characteristics in terms of equivalent radius, 62.5 ± 5 km and 92 ± 5 km, respectively, and bulk density, 4.4 ± 0.4 g cm-3 and 2.43 ± 0.35 g cm-3, respectively. In particular, the growing inadequacy of the radar shape model for interpreting any type of observations of Kleopatra (light curves, AO images, stellar occultations) in a satisfactory manner suggests that Kleopatra is more likely to be a dumb-bell-shaped object than a “dog-bone.”
IMPLEMENTING A NOVEL CYCLIC CO2 FLOOD IN PALEOZOIC REEFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
James R. Wood; W. Quinlan; A. Wylie
2003-07-01
Recycled CO2 will be used in this demonstration project to produce bypassed oil from the Silurian Charlton 6 pinnacle reef (Otsego County) in the Michigan Basin. Contract negotiations by our industry partner to gain access to this CO2, which would otherwise be vented to the atmosphere, are near completion. A new method of subsurface characterization, log curve amplitude slicing, is being used to map facies distributions and reservoir properties in two reefs, the Belle River Mills and Chester 18 Fields. The Belle River Mills and Chester 18 fields are being used as type fields because they have excellent log-curve and core data coverage. Amplitude slicing of the normalized gamma ray curves is showing trends that may indicate significant heterogeneity and compartmentalization in these reservoirs. Digital and hard copy data continue to be compiled for the Niagaran reefs in the Michigan Basin. Technology transfer took place through technical presentations regarding the log curve amplitude slicing technique and a booth at the Midwest PTTC meeting.
Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing
2018-04-01
The correlations in quantum networks have attracted strong interest with new types of violations of locality. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph, which allows constructing a polynomial-time algorithm. For the quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic nonmultilocality of quantum networks with multiple independent observers using the new Bell inequalities. The violations are maximal with respect to the presented Tsirelson's bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states or some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Karoly F.; Vertesi, Tamas
2010-08-15
The I3322 inequality is the simplest bipartite two-outcome Bell inequality beyond the Clauser-Horne-Shimony-Holt (CHSH) inequality, consisting of three two-outcome measurements per party. In the case of the CHSH inequality the maximal quantum violation can already be attained with local two-dimensional quantum systems; however, there is no such evidence for the I3322 inequality. In this paper a family of measurement operators and states is given which enables us to attain the maximum quantum value in an infinite-dimensional Hilbert space. Further, it is conjectured that our construction is optimal in the sense that measuring finite-dimensional quantum systems is not enough to achieve the true quantum maximum. We also describe an efficient iterative algorithm for computing the quantum maximum of an arbitrary two-outcome Bell inequality in any given Hilbert space dimension. This algorithm played a key role in obtaining our results for the I3322 inequality, and we also applied it to improve on our previous results concerning the maximum quantum violation of several bipartite two-outcome Bell inequalities with up to five settings per party.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
Growth and nonlinear response of driven water bells
NASA Astrophysics Data System (ADS)
Kolinski, John M.; Aharoni, Hillel; Fineberg, Jay; Sharon, Eran
2017-04-01
A water bell forms when a fluid jet impacts upon a target and separates into a two-dimensional sheet. Depending on the angle of separation from the target, the sheet can curve into a variety of different geometries. We show analytically that harmonic perturbations of water bells have linear wave solutions with geometry-dependent growth. We test the predictions of this model experimentally with a custom target system, and observe growth in agreement with the model below a critical forcing amplitude. Once the critical forcing amplitude is exceeded, a nonlinear transcritical bifurcation occurs; the response amplitude increases linearly with increasing forcing amplitude, albeit with a fundamentally different spatial form, and distinct nodes appear in the amplitude envelope.
Pang, Haosheng; Li, Minglin; Gao, Chenghui; Huang, Haili; Zhuo, Weirong; Hu, Jianyue; Wan, Yaling; Luo, Jing; Wang, Weidong
2018-03-27
Single-layer molybdenum disulfide (SLMoS2) nanosheets have been experimentally discovered to exist in two different polymorphs, which exhibit different electrical properties, metallic or semiconducting. Herein, molecular dynamics (MD) simulations of nanoindentation and uniaxial compression were conducted to investigate the phase transition of SLMoS2 nanosheets. Typical load-deflection curves, stress-strain curves, and local atomic structures were obtained. Under nanoindentation, the loading force decreases sharply and then increases again at a critical deflection, which is inferred to indicate the phase transition. In addition to the layer thickness, some related bond lengths and bond angles were also found to change suddenly as the phase transition occurs. A bell-like hollow, the so-called residual deformation, was found to form, mainly due to the lattice distortion around the waist of the bell. The effect of indenter size on the residual hollow was also analyzed. Under uniaxial compression along the armchair direction, a different phase transition, to a uniformly quadrilateral structure, was observed when the strain exceeds 27.7%. The quadrilateral structure was found to be stable and to exhibit metallic conductivity in view of first-principles calculations.
Stellar by Day, Planetary by Night: Atmospheres of Ultra-Hot Jupiters
NASA Astrophysics Data System (ADS)
Hensley, Kerry
2018-06-01
Move over, hot Jupiters: there's an even stranger kind of giant planet in the universe! Ultra-hot Jupiters are so strongly irradiated that the molecules in their atmospheres split apart. What does this mean for heat transport on these planets?

Atmospheres of Exotic Planets. [Figure: a diagram showing the orbit of an ultra-hot Jupiter and the longitudes at which dissociation and recombination occur; Bell & Cowan 2018.] Similar to hot Jupiters, ultra-hot Jupiters are gas giants with atmospheres dominated by molecular hydrogen. What makes them interesting is that their dayside atmospheres are so hot that the molecules dissociate into individual hydrogen atoms, more like the atmospheres of stars than planets. Because of the intense stellar irradiation, there is also an extreme temperature difference between the day and night sides of these planets, potentially more than 1,000 K! As the stellar irradiation increases, the dayside atmosphere becomes hotter and hotter and the temperature difference between the day and night sides increases. When hot atomic hydrogen is transported into cooler regions (by winds, for instance), it recombines to form H2 molecules and heats the gas, effectively transporting heat from one location to another. This is similar to how the condensation of water redistributes heat in Earth's atmosphere, but what effect does this phenomenon have on the atmospheres of ultra-hot Jupiters? [Figure: maps of atmospheric temperature and molecular hydrogen dissociation fraction for three wind speeds; Bell & Cowan 2018.]

Modeling Heat Redistribution. Taylor Bell and Nicolas Cowan (McGill University) used an energy-balance model to estimate the effects of H2 dissociation and recombination on heat transport in ultra-hot Jupiter atmospheres. In particular, they explored the redistribution of heat and how it affects the resultant phase curve, the curve that describes the combination of reflected and thermally emitted light from the planet, observed as a function of its phase angle. For reasonable eastward wind speeds, Bell and Cowan found that the recombination of atomic hydrogen shifts the peak of the phase curve in the eastward direction, with the shift becoming more pronounced with increasing eastward wind speed. Additionally, because heat is distributed more evenly across the planet, including this process decreases the amplitude of the phase variations.

A Bright Future for Ultra-hot Jupiters. [Figure: theoretical phase curves for three wind speeds; transits and eclipses have been neglected; Bell & Cowan 2018.] While this simple model doesn't include potentially important effects, such as the changing atmospheric opacity as a function of longitude or the formation of clouds on the planet's nightside, this result indicates that caution is required when interpreting phase curves of ultra-hot Jupiters. For example, neglecting recombination means assuming a lower heat transport efficiency, which will require artificially high wind speeds to match observed phase curves. Only a few ultra-hot Jupiters are currently known, but that will soon change. The Transiting Exoplanet Survey Satellite (TESS) mission, which is set to begin its first science observations on June 17, 2018, will search for exoplanets around bright stars, including nearby cool stars and more distant hot stars. The hot stars may play host to these exotic exoplanets, and upcoming observations of ultra-hot Jupiters like KELT-9b will put this theory of heat redistribution to the test.

Citation: Taylor J. Bell & Nicolas B. Cowan 2018 ApJL 857 L20. doi:10.3847/2041-8213/aabcc8
A Comparison of Three Curve Intersection Algorithms
NASA Technical Reports Server (NTRS)
Sederberg, T. W.; Parry, S. R.
1985-01-01
An empirical comparison is made between three algorithms for computing the points of intersection of two planar Bezier curves. The algorithms compared are: the well known Bezier subdivision algorithm, which is discussed in Lane 80; a subdivision algorithm based on interval analysis due to Koparkar and Mudur; and an algorithm due to Sederberg, Anderson and Goldman which reduces the problem to one of finding the roots of a univariate polynomial. The details of these three algorithms are presented in their respective references.
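A compact sketch of the first of those techniques, Bezier subdivision: de Casteljau splitting combined with a bounding-box overlap test (justified by the convex hull property), recursing until the overlapping boxes shrink below a tolerance. This illustrates the general method, not the exact code of any of the three papers:

```python
def split(ctrl, t=0.5):
    """de Casteljau subdivision of a Bezier curve given by (x, y) control points."""
    left, right, pts = [], [], [list(p) for p in ctrl]
    while pts:
        left.append(pts[0]); right.insert(0, pts[-1])
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return left, right

def bbox_overlap(c1, c2):
    """The control polygon bounds the curve (convex hull property)."""
    for i in (0, 1):
        if max(p[i] for p in c1) < min(p[i] for p in c2): return False
        if max(p[i] for p in c2) < min(p[i] for p in c1): return False
    return True

def intersections(c1, c2, tol=1e-6, out=None):
    """Recursively subdivide both curves until overlapping boxes are below tol."""
    out = [] if out is None else out
    if not bbox_overlap(c1, c2):
        return out
    size = max(max(p[i] for p in c) - min(p[i] for p in c)
               for c in (c1, c2) for i in (0, 1))
    if size < tol:
        out.append(c1[0])          # box small enough: report a point
        return out
    for a in split(c1):
        for b in split(c2):
            intersections(a, b, tol, out)
    return out

# Two planar cubic Bezier curves that cross
c1 = [(0, 0), (1, 2), (2, -2), (3, 0)]
c2 = [(0, -1), (1, 1), (2, 1), (3, -1)]
print(intersections(c1, c2)[:1])
```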
Quantum Cryptography Based on the Deutsch-Jozsa Algorithm
NASA Astrophysics Data System (ADS)
Nagata, Koji; Nakamura, Tadao; Farouk, Ahmed
2017-09-01
Recently, secure quantum key distribution based on Deutsch's algorithm using the Bell state was reported (Nagata and Nakamura, Int. J. Theor. Phys. doi: 10.1007/s10773-017-3352-4, 2017). Our aim is to extend that result to a multipartite system. In this paper, we propose a highly speedy key distribution protocol. We present secure quantum key distribution based on a special Deutsch-Jozsa algorithm using Greenberger-Horne-Zeilinger states. Bob has promised to use a function f which is one of two kinds: either the value of f(x) is constant for all values of x, or else the value of f(x) is balanced, that is, equal to 1 for exactly half of the possible x, and 0 for the other half. Here, we introduce an additional condition on the function when it is balanced. Our quantum key distribution overcomes a classical counterpart by a factor O(2^N).
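A small state-vector illustration of the Deutsch-Jozsa primitive underlying the protocol: after the phase oracle and a final Hadamard transform, the all-zeros outcome has probability 1 for a constant f and 0 for a balanced f. Plain numpy, for intuition only; the key-distribution layer of the paper is not reproduced:

```python
import numpy as np

def deutsch_jozsa_p0(f, n):
    """Probability of measuring |0...0> after H^n · Oracle_f · H^n |0...0>."""
    N = 2 ** n
    amp = np.full(N, 1 / np.sqrt(N))              # uniform superposition from H^n
    amp = amp * np.array([(-1) ** f(x) for x in range(N)])   # phase oracle
    # Amplitude of |0...0> after the final Hadamards is the sum of all
    # amplitudes divided by sqrt(N)
    return abs(amp.sum() / np.sqrt(N)) ** 2

n = 4
constant = lambda x: 0
balanced = lambda x: bin(x).count("1") % 2    # parity is balanced on {0,...,2^n - 1}
print(deutsch_jozsa_p0(constant, n))   # 1.0 -> f is constant
print(deutsch_jozsa_p0(balanced, n))   # 0.0 -> f is balanced
```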
Recognition of Protein-coding Genes Based on Z-curve Algorithms
Guo, Feng-Biao; Lin, Yan; Chen, Ling-Ling
2014-01-01
Recognition of protein-coding genes, a classical bioinformatics issue, is an indispensable step in annotating newly sequenced genomes. The Z-curve algorithm, as one of the most effective methods for this issue, has been successfully applied in annotating or re-annotating many genomes, including those of bacteria, archaea and viruses. Two Z-curve based ab initio gene-finding programs have been developed: ZCURVE (for bacteria and archaea) and ZCURVE_V (for viruses and phages). ZCURVE_C (for 57 bacteria) and Zfisher (for any bacterium) are web servers for re-annotation of bacterial and archaeal genomes. The above four tools can be used for genome annotation or re-annotation, either independently or combined with other gene-finding programs. In addition to recognizing protein-coding genes and exons, Z-curve algorithms are also effective in recognizing promoters and translation start sites. Here, we summarize the applications of Z-curve algorithms in gene finding and genome annotation. PMID:24822027
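The Z-curve transform itself is simple: a DNA sequence maps to three cumulative coordinates separating purine/pyrimidine, amino/keto, and weak/strong base classes. A minimal sketch (the windowing and classification layers of ZCURVE are omitted):

```python
def z_curve(seq):
    """Cumulative Z-curve coordinates (x, y, z) for a DNA string."""
    x = y = z = 0
    coords = []
    for base in seq.upper():
        x += 1 if base in "AG" else -1     # purines (R) vs pyrimidines (Y)
        y += 1 if base in "AC" else -1     # amino (M) vs keto (K) bases
        z += 1 if base in "AT" else -1     # weak (W) vs strong (S) pairs
        coords.append((x, y, z))
    return coords

print(z_curve("ATGCGT")[-1])   # endpoint of the curve for a short sequence
```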
Army Physical Therapy Productivity According to the Performance Based Adjustment Model
2008-05-02
variation in processes often fell along a bell-shaped curve or normal distribution. Shewhart later developed a control chart to track and analyze variation in...
Exact BPF and FBP algorithms for nonstandard saddle curves.
Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge
2005-11-01
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both the algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both the algorithms exhibit similar noise characteristics.
The Trouble with the Curve: An Argument for the Abolishment of Norm-Referenced Evaluation
ERIC Educational Resources Information Center
Raymond, Gregory
2013-01-01
The norm-referenced evaluation system has been used to grade students, from elementary to post-secondary, for decades. However, the system itself is inherently flawed. Looking at the history of the norm-referenced system and its most famous tool, the Bell Curve, and taking examples from the author's own teaching experience, this paper examines the…
Tracking of Pacific walruses in the Chukchi Sea using a single hydrophone.
Mouy, Xavier; Hannay, David; Zykov, Mikhail; Martin, Bruce
2012-02-01
The vocal repertoire of Pacific walruses includes underwater sound pulses referred to as knocks and bell-like calls. An extended acoustic monitoring program was performed in summer 2007 over a large region of the eastern Chukchi Sea using autonomous seabed-mounted acoustic recorders. Walrus knocks were identified in many of the recordings and most of these sounds included multiple bottom and surface reflected signals. This paper investigates the use of a localization technique based on relative multipath arrival times (RMATs) for potential behavior studies. First, knocks are detected using a semi-automated kurtosis-based algorithm. Then RMATs are matched to values predicted by a ray-tracing model. Walrus tracks with vertical and horizontal movements were obtained. The tracks included repeated dives between 4.0 m and 15.5 m depth and a deep dive to the sea bottom (53 m). Depths at which bell-like sounds are produced, average knock production rate and source levels estimates of the knocks were determined. Bell sounds were produced at all depths throughout the dives. Average knock production rates varied from 59 to 75 knocks/min. Average source level of the knocks was estimated to 177.6 ± 7.5 dB re 1 μPa peak @ 1 m. © 2012 Acoustical Society of America
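A toy sketch of a kurtosis-based impulse detector of the kind mentioned above; the window length and threshold are arbitrary illustrative choices, not the study's settings:

```python
import numpy as np
from scipy.stats import kurtosis

def detect_impulses(signal, fs, win_s=0.05, thresh=5.0):
    """Return start times (s) of windows whose excess kurtosis exceeds thresh."""
    win = int(win_s * fs)
    hits = []
    for start in range(0, len(signal) - win, win // 2):    # 50% window overlap
        k = kurtosis(signal[start:start + win])            # excess kurtosis (0 for Gaussian)
        if k > thresh:
            hits.append(start / fs)
    return hits

# Synthetic test: Gaussian background noise with two sharp 'knocks'
fs = 8000
rng = np.random.default_rng(0)
x = rng.normal(0, 1, fs)          # 1 s of background noise
x[2000] += 30.0                   # impulse at 0.25 s
x[6000] += 30.0                   # impulse at 0.75 s
print(detect_impulses(x, fs))     # windows near 0.25 s and 0.75 s
```

Impulsive sounds such as knocks concentrate energy in a few samples, inflating the fourth moment, so kurtosis separates them from the Gaussian ambient noise floor.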
ERIC Educational Resources Information Center
Rodriguez-Rodriguez, Cristina; Amigo, Jose Manuel; Coello, Jordi; Maspoch, Santiago
2007-01-01
A spectrophotometric study of the acid-base equilibria of 8-hydroxyquinoline-5-sulfonic acid, used to illustrate the multivariate curve resolution-alternating least squares (MCR-ALS) algorithm, is described. The algorithm provides a great deal of information and hence is of great value for chemometrics research.
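The MCR-ALS core iteration is short enough to sketch: alternate least-squares updates of a concentration matrix C and a spectra matrix S so that the data matrix D is approximated by C·Sᵀ, with nonnegativity imposed here by simple clipping. The two-species toy data below are synthetic, standing in for the acid and base forms of the indicator.

```python
# Minimal MCR-ALS sketch on synthetic two-species spectrophotometric data.
import numpy as np

def mcr_als(D, C0, n_iter=100):
    """Alternate least squares so that D ~ C @ S.T, clipping to nonnegative."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
    return C, S

rng = np.random.default_rng(0)
S_true = np.abs(rng.standard_normal((50, 2)))    # spectra (50 wavelengths)
C_true = np.abs(rng.standard_normal((20, 2)))    # concentrations (20 pH points)
D = C_true @ S_true.T + 0.01 * rng.standard_normal((20, 50))
C, S = mcr_als(D, C0=np.abs(rng.standard_normal((20, 2))))
print("residual norm:", np.linalg.norm(D - C @ S.T))
```

Real MCR-ALS packages add closure, unimodality, and convergence checks; this shows only the alternating structure.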
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of χ² statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function and χ² minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
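NLINEAR itself is FORTRAN 77; the same weighted χ² minimization can be sketched in Python with scipy. The model, data, and uncertainties below are hypothetical.

```python
# Weighted nonlinear least squares (chi-square minimization) sketch.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.exp(-b * x) + c

x = np.linspace(0, 4, 40)
rng = np.random.default_rng(1)
y = model(x, 2.5, 1.3, 0.5) + 0.05 * rng.standard_normal(x.size)
sigma = np.full_like(x, 0.05)                  # measurement uncertainties

popt, pcov = curve_fit(model, x, y, sigma=sigma, absolute_sigma=True)
chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
print("parameters:", popt)
print("reduced chi-square:", chi2 / (x.size - len(popt)))   # goodness of fit
```

A reduced χ² near 1 indicates the model describes the data to within the stated uncertainties, the same diagnostic the abstract describes.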
Exact BPF and FBP algorithms for nonstandard saddle curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu Hengyong; Zhao Shiying; Ye Yangbo
2005-11-15
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. Particularly, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both algorithms exhibit similar noise characteristics.
Data reduction using cubic rational B-splines
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Piegl, Les A.
1992-01-01
A geometric method is proposed for fitting rational cubic B-spline curves to data that represent smooth curves, including intersection or silhouette lines. The algorithm is based on the convex-hull and variation-diminishing properties of Bezier/B-spline curves. The algorithm has the following structure: it tries to fit one Bezier segment to the entire data set and, if that is impossible, it subdivides the data set and reconsiders the subset. After accepting a subset, the algorithm tries to find the longest run of points within a tolerance and then approximates this set with a cubic Bezier segment. The algorithm applies this procedure repeatedly to the rest of the data points until all points are fitted. It is concluded that the algorithm delivers fitting curves that approximate the data with high accuracy, even in cases with large tolerances.
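The accept-or-subdivide step rests on fitting one cubic segment and checking its deviation. A minimal sketch of that inner step (not the paper's full algorithm) is a least-squares cubic Bézier fit with chord-length parameters and a maximum-deviation test; the tolerance and test curve are arbitrary.

```python
# Least-squares cubic Bezier fit with a tolerance check (sketch).
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis at parameters t, shape (len(t), 4)."""
    t = t[:, None]
    return np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t), t ** 3])

def fit_bezier(points, tol):
    """Return (control_points, fits_within_tol) for one cubic segment."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]                              # chord-length parameterization
    B = bernstein3(t)
    ctrl = np.linalg.lstsq(B, points, rcond=None)[0]
    err = np.linalg.norm(B @ ctrl - points, axis=1).max()
    return ctrl, err <= tol

pts = np.c_[np.linspace(0, 1, 30), np.sin(np.linspace(0, 3, 30))]
ctrl, ok = fit_bezier(pts, tol=0.05)
print(ok, ctrl.round(3))
```

When `ok` is False, a subdivision scheme like the one described above would split the points and retry on each half.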
When the bell tolls on Bell's palsy: finding occult malignancy in acute-onset facial paralysis.
Quesnel, Alicia M; Lindsay, Robin W; Hadlock, Tessa A
2010-01-01
This study reports 4 cases of occult parotid malignancy presenting with sudden-onset facial paralysis to demonstrate that failure to regain tone 6 months after onset distinguishes these patients from Bell's palsy patients with delayed recovery and to propose a diagnostic algorithm for this subset of patients. A case series of 4 patients with occult parotid malignancies presenting with acute-onset unilateral facial paralysis is reported. Initial imaging on all 4 patients did not demonstrate a parotid mass. Diagnostic delays ranged from 7 to 36 months from time of onset of facial paralysis to time of diagnosis of parotid malignancy. Additional physical examination findings, especially failure to regain tone, as well as properly protocolled radiologic studies reviewed with dedicated head and neck radiologists, were helpful in arriving at the diagnosis. An algorithm to minimize diagnostic delays in this subset of acute facial paralysis patients is presented. Careful attention to facial tone, in addition to movement, is important in the diagnostic evaluation of acute-onset facial paralysis. Copyright 2010 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Mosher, Marianne; Huff, Edward M.
2004-01-01
Healthwatch-2 (HW-2) is a research tool designed to facilitate the development and testing of in-flight health monitoring algorithms. HW-2 software is written in C/C++ and executes on an x86-based computer running the Linux operating system. The executive module has interfaces for collecting various signal data, such as vibration, torque, tachometer, and GPS. It is designed to perform in-flight time or frequency averaging based on specifications defined in a user-supplied configuration file. Averaged data are then passed to a user-supplied algorithm written as a Matlab function. This allows researchers a convenient method for testing in-flight algorithms. In addition to its in-flight capabilities, HW-2 software is also capable of reading archived flight data and processing it as if collected in-flight. This allows algorithms to be developed and tested in the laboratory before being flown. Currently HW-2 has passed its checkout phase and is collecting data on a Bell OH-58C helicopter operated by the U.S. Army at NASA Ames Research Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vongehr, Sascha, E-mail: vongehr@usc.edu
There are increasing suggestions for computer simulations of quantum statistics which try to violate Bell type inequalities via classical, common cause correlations. The Clauser–Horne–Shimony–Holt (CHSH) inequality is very robust. However, we argue that with the Einstein–Podolsky–Rosen setup, the CHSH is inferior to the Bell inequality, although and because the latter must assume anti-correlation of entangled photon singlet states. We simulate how often quantum behavior violates both inequalities, depending on the number of photons. Violating Bell 99% of the time is argued to be an ideal benchmark. We present hidden variables that violate the Bell and CHSH inequalities with 50% probability, and ones which violate Bell 85% of the time when missing 13% anti-correlation. We discuss how to present the quantum correlations to a wide audience and conclude that, when defending against claims of hidden classicality, one should demand numerical simulations and insist on anti-correlation and the full amount of Bell violation. -- Highlights: •The widely assumed superiority of the CHSH fails in the EPR problem. •We simulate Bell type inequalities behavior depending on the number of photons. •The core of Bell's theorem in the EPR setup is introduced in a simple way understandable to a wide audience. •We present hidden variables that violate both inequalities with 50% probability. •Algorithms have been supplied in form of Mathematica programs.
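The paper's programs are in Mathematica; the basic numerical experiment it builds on can be sketched in Python: sample outcome pairs with the singlet-state joint statistics, estimate the four CHSH correlators, and recover |S| ≈ 2√2, above the classical bound of 2. The settings are the standard CHSH-optimal angles.

```python
# Monte Carlo estimate of the CHSH value for a spin singlet (sketch).
import numpy as np

rng = np.random.default_rng(0)

def singlet_pair(theta, n):
    """Sample n (+/-1) outcome pairs for analyzers separated by angle theta:
    for the singlet, P(outcomes equal) = sin^2(theta/2)."""
    a = rng.choice([-1, 1], size=n)
    equal = rng.random(n) < np.sin(theta / 2) ** 2
    b = np.where(equal, a, -a)
    return a, b

def correlator(theta, n=200_000):
    a, b = singlet_pair(theta, n)
    return np.mean(a * b)            # quantum prediction: -cos(theta)

a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = (correlator(b1 - a1) - correlator(b2 - a1)
     + correlator(b1 - a2) + correlator(b2 - a2))
print(abs(S))    # ~2*sqrt(2) = 2.83 > 2, the classical CHSH bound
```

Any local hidden-variable model of the kind the abstract discusses must keep |S| ≤ 2, which is what such simulations are used to check.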
Object-Image Correspondence for Algebraic Curves under Projections
NASA Astrophysics Data System (ADS)
Burdis, Joseph M.; Kogan, Irina A.; Hong, Hoon
2013-03-01
We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of a number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.
Intelligence and the Social Scientist.
ERIC Educational Resources Information Center
Kass, Leon R.
1995-01-01
Uses the book, "The Bell Curve," to illustrate the problem of "dangerous knowledge" and its power to harm. The article examines what the book is saying about intelligence, its meaning to society, and the book's possible effect on radicalizing political thought. (GR)
Proposed algorithm for determining the delta intercept of a thermocouple psychrometer curve
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurzmack, M.A.
1993-07-01
The USGS Hydrologic Investigations Program is currently developing instrumentation to study the unsaturated zone at Yucca Mountain in Nevada. Surface-based boreholes up to 2,500 feet in depth will be drilled, and then instrumented in order to define the water potential field within the unsaturated zone. Thermocouple psychrometers will be used to monitor the in-situ water potential. An algorithm is proposed for simply and efficiently reducing a six-wire thermocouple psychrometer voltage output curve to a single value, the delta intercept. The algorithm identifies a plateau region in the psychrometer curve and extrapolates a linear regression back to the initial start of relaxation. When properly conditioned for the measurements being made, the algorithm yields reasonable results even with incomplete or noisy psychrometer curves over a 1 to 60 bar range.
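One plausible reading of this plateau-and-extrapolate procedure can be sketched in a few lines: find the flattest portion of the curve by thresholding the slope, fit a line there, and evaluate it at the relaxation start time. The slope tolerance, test curve, and relaxation start are hypothetical, not the USGS conditioning values.

```python
# Sketch of a delta-intercept reduction: plateau detection + extrapolation.
import numpy as np

def delta_intercept(t, v, t0, slope_tol=0.02):
    """Fit a line to the flattest ('plateau') part of the output curve and
    extrapolate it back to the relaxation start time t0."""
    dv = np.gradient(v, t)
    plateau = np.abs(dv) < slope_tol * np.abs(dv).max()
    if plateau.sum() < 2:
        raise ValueError("no plateau region found")
    m, b = np.polyfit(t[plateau], v[plateau], 1)
    return m * t0 + b                      # the delta intercept

t = np.linspace(0, 60, 300)
v = 25 * np.exp(-t / 5) + 10 - 0.01 * t    # decay onto a drifting plateau
print(delta_intercept(t, v, t0=0.0))       # ~10, the extrapolated plateau value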
A z-Vertex Trigger for Belle II
NASA Astrophysics Data System (ADS)
Skambraks, S.; Abudinén, F.; Chen, Y.; Feindt, M.; Frühwirth, R.; Heck, M.; Kiesling, C.; Knoll, A.; Neuhaus, S.; Paul, S.; Schieck, J.
2015-08-01
The Belle II experiment will go into operation at the upgraded SuperKEKB collider in 2016. SuperKEKB is designed to deliver an instantaneous luminosity L = 8 × 10³⁵ cm⁻² s⁻¹. The experiment will therefore have to cope with a much larger machine background than its predecessor Belle, in particular from events outside of the interaction region. We present the concept of a track trigger, based on a neural network approach, that is able to suppress a large fraction of this background by reconstructing the z (longitudinal) position of the event vertex within the latency of the first level trigger. The trigger uses the hit information from the Central Drift Chamber (CDC) of Belle II within narrow cones in polar and azimuthal angle as well as in transverse momentum ("sectors"), and estimates the z-vertex without explicit track reconstruction. The preprocessing for the track trigger is based on the track information provided by the standard CDC trigger. It takes input from the 2D track finder, adds information from the stereo wires of the CDC, and finds the appropriate sectors in the CDC for each track. Within the sector, the z-vertex is estimated by a specialized neural network, with the drift times from the CDC as input and a continuous output corresponding to the scaled z-vertex. The neural algorithm will be implemented in programmable hardware. To this end a Virtex 7 FPGA board will be used, which provides at present the most promising solution for a fully parallelized implementation of neural networks or alternative multivariate methods. A high speed interface for external memory will be integrated into the platform, to be able to store the O(10⁹) parameters required. The contribution presents the results of our feasibility studies and discusses the details of the envisaged hardware solution.
Rehabilitation of Bells' palsy from a multi-team perspective.
Hultcrantz, Malou
2016-01-01
Conclusions Defectively healed facial paralysis causes difficulties talking and eating, involuntary spasms (synkinesis), and cosmetic deformities, which can give rise to both severe psychological and physical trauma. A team consisting of ear-nose-throat specialists, plastic surgeons and physiotherapists can offer better care, treatment and outcomes for patients suffering from Bells' palsy. Objectives Patients suffering from Bells' palsy from all ENT hospitals in Sweden and the University Hospital in Helsinki have been included. Methods Results have been drawn and statistically processed for different outcomes from a prospective, double-blind crossover study. Results from a pilot surgical study and therapeutic results from physiotherapy studies have been included. Ideas concerning different kinds of surgery are reviewed and the role of physiotherapy discussed. Results According to common results, treatment with Prednisolone enhances the recovery rate and should, if possible, be used early in the course. Sunnybrook grading at 1 month after onset most accurately predicts non-recovery at 12 months in Bells' palsy, and a risk-factor curve is presented in order to predict outcome and to select patients for facial surgery. This report focuses on how to handle patients with Bells' palsy from a multi-rehabilitation-team point of view, and on what is recommended to provide these patients with the best clinical and surgical help.
[Facial palsy: diagnosis and management by primary care physicians].
Alvarez, V; Dussoix, P; Gaspoz, J-M
2009-01-28
The incidence of facial palsy is about 50/100,000/year, i.e. 210 cases/year in Geneva. Clinicians can be puzzled by it, because it encompasses aetiologies with very diverse prognoses. Most patients suffer from Bell palsy, which evolves favourably. Some, however, suffer from diseases such as meningitis, HIV infection, Lyme disease, or CVA that require fast identification because of their severity and of the need for specific treatments. This article proposes an algorithm for pragmatic and evidence-based management of facial palsy.
Robust stochastic resonance: Signal detection and adaptation in impulsive noise
NASA Astrophysics Data System (ADS)
Kosko, Bart; Mitaim, Sanya
2001-11-01
Stochastic resonance (SR) occurs when noise improves a system performance measure such as a spectral signal-to-noise ratio or a cross-correlation measure. All SR studies have assumed that the forcing noise has finite variance. Most have further assumed that the noise is Gaussian. We show that SR still occurs for the more general case of impulsive or infinite-variance noise. The SR effect fades as the noise grows more impulsive. We study this fading effect on the family of symmetric α-stable bell curves that includes the Gaussian bell curve as a special case. These bell curves have thicker tails as the parameter α falls from 2 (the Gaussian case) to 1 (the Cauchy case) to even lower values. Thicker tails create more frequent and more violent noise impulses. The main feedback and feedforward models in the SR literature show this fading SR effect for periodic forcing signals when we plot either the signal-to-noise ratio or a signal correlation measure against the dispersion of the α-stable noise. Linear regression shows that an exponential law γ_opt(α) = cA^α describes this relation between the impulsive index α and the SR-optimal noise dispersion γ_opt. The results show that SR is robust against noise "outliers." So SR may be more widespread in nature than previously believed. Such robustness also favors the use of SR in engineering systems. We further show that an adaptive system can learn the optimal noise dispersion for two standard SR models (the quartic bistable model and the FitzHugh-Nagumo neuron model) for the signal-to-noise ratio performance measure. This also favors practical applications of SR and suggests that evolution may have tuned the noise-sensitive parameters of biological systems.
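A crude numerical experiment in the spirit of the quartic bistable model can be sketched as follows, using Cauchy noise (the α = 1 stable case) as the impulsive forcing. All parameter values are illustrative; the output is the correlation of the binary system output with the subthreshold periodic signal, which should rise and then fall as the noise dispersion γ grows (the SR signature).

```python
# Quartic bistable SR sketch with impulsive (Cauchy, alpha = 1) noise.
import numpy as np

def sr_correlation(gamma, A=0.3, w=2 * np.pi * 0.1, dt=0.01, T=2000.0, seed=0):
    """Euler-Maruyama run of dx = (x - x^3 + A sin(wt)) dt + Cauchy noise of
    dispersion gamma; returns correlation of sign(x) with the forcing."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    x = 0.0
    out = np.empty(n)
    for i in range(n):
        noise = gamma * dt * rng.standard_cauchy()   # alpha = 1 increment
        x += (x - x ** 3 + A * np.sin(w * t[i])) * dt + noise
        x = np.clip(x, -10.0, 10.0)    # guard against rare huge impulses
        out[i] = np.sign(x)
    s = np.sin(w * t)
    return np.mean(out * s) / np.sqrt(np.mean(s ** 2))

for g in [0.02, 0.1, 0.5, 2.0]:        # sweep the noise dispersion
    print(g, round(sr_correlation(g), 3))
```

With A = 0.3 the forcing alone cannot switch wells (the deterministic threshold is about 0.385), so any signal tracking in the output is noise-assisted.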
A Speech Controlled Information-Retrieval System,
1983-01-01
instance, monitoring the speed of articulation continuously could lead to a faster time warping algorithm by restricting the amount of overlapping of… M. E. (1975) "LEX - a lexical analyser generator" CSTR 39, Bell Laboratories.
Monte Carlo study of molten salt with charge asymmetry near the electrode surface.
Kłos, Jacek; Lamperski, Stanisław
2014-02-07
Results of the Monte Carlo simulation of the electrode | molten salt or ionic liquid interface are reported. The system investigated is approximated by the primitive model of an electrolyte in contact with a charged hard wall. The ions differ in charge, namely the anions are divalent and the cations are monovalent, but they are of the same diameter d = 400 pm. The temperature analysis of the heat capacity at constant volume, C_v, and the anion radial distribution function, g(2-/2-), allowed the choice of the temperature of the study, which is T = 2800 K and corresponds to T* = 0.34 (the reduced temperature T* is defined in the text). The differential capacitance curve of the interface with the molten salt or ionic liquid at c = 5.79 M has a distorted bell shape. It is shown that with increasing electrolyte concentration from c = 0.4 to 5 M the differential capacitance curves undergo a transition from U shape to bell shape.
Curved-line search algorithm for ab initio atomic structure relaxation
NASA Astrophysics Data System (ADS)
Chen, Zhanghui; Li, Jingbo; Li, Shushen; Wang, Lin-Wang
2017-09-01
Ab initio atomic relaxations often take large numbers of steps and long times to converge, especially when the initial atomic configurations are far from the local minimum or there are curved and narrow valleys in the multidimensional potentials. An atomic relaxation method based on on-the-fly force learning and a corresponding curved-line search algorithm is presented to accelerate this process. Results demonstrate the superior performance of this method for metal and magnetic clusters when compared with the conventional conjugate-gradient method.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-29
... License Application for Bell Bend Nuclear Power Plant; Exemption 1.0 Background PPL Bell Bend, LLC... for Nuclear Power Plants.'' This reactor is to be identified as Bell Bend Nuclear Power Plant (BBNPP... based upon the U.S. EPR reference COL (RCOL) application for UniStar's Calvert Cliffs Nuclear Power...
Lever, Melissa; Lim, Hong-Sheng; Kruger, Philipp; Nguyen, John; Trendel, Nicola; Abu-Shah, Enas; Maini, Philip Kumar; van der Merwe, Philip Anton
2016-01-01
T cells must respond differently to antigens of varying affinity presented at different doses. Previous attempts to map peptide MHC (pMHC) affinity onto T-cell responses have produced inconsistent patterns of responses, preventing formulations of canonical models of T-cell signaling. Here, a systematic analysis of T-cell responses to 1 million-fold variations in both pMHC affinity and dose produced bell-shaped dose–response curves and different optimal pMHC affinities at different pMHC doses. Using sequential model rejection/identification algorithms, we identified a unique, minimal model of cellular signaling incorporating kinetic proofreading with limited signaling coupled to an incoherent feed-forward loop (KPL-IFF) that reproduces these observations. We show that the KPL-IFF model correctly predicts the T-cell response to antigen copresentation. Our work offers a general approach for studying cellular signaling that does not require full details of biochemical pathways. PMID:27702900
Lever, Melissa; Lim, Hong-Sheng; Kruger, Philipp; Nguyen, John; Trendel, Nicola; Abu-Shah, Enas; Maini, Philip Kumar; van der Merwe, Philip Anton; Dushek, Omer
2016-10-25
T cells must respond differently to antigens of varying affinity presented at different doses. Previous attempts to map peptide MHC (pMHC) affinity onto T-cell responses have produced inconsistent patterns of responses, preventing formulations of canonical models of T-cell signaling. Here, a systematic analysis of T-cell responses to 1 million-fold variations in both pMHC affinity and dose produced bell-shaped dose-response curves and different optimal pMHC affinities at different pMHC doses. Using sequential model rejection/identification algorithms, we identified a unique, minimal model of cellular signaling incorporating kinetic proofreading with limited signaling coupled to an incoherent feed-forward loop (KPL-IFF) that reproduces these observations. We show that the KPL-IFF model correctly predicts the T-cell response to antigen copresentation. Our work offers a general approach for studying cellular signaling that does not require full details of biochemical pathways.
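The KPL-IFF equations are not given in these abstracts; the toy caricature below combines the two named ingredients, a kinetic-proofreading discount on occupancy and a saturating incoherent feed-forward readout, to show how bell-shaped dose-response curves with affinity-dependent optima can arise. Every rate name and value here is hypothetical, not the paper's fitted model.

```python
# Toy kinetic proofreading + incoherent feed-forward readout (sketch).
import numpy as np

def response(L, koff, kp=0.1, N=3, KD_scale=1.0, Ki=0.3):
    """L: pMHC dose, koff: off-rate (inversely related to affinity)."""
    KD = KD_scale * koff                 # affinity tied to off-rate
    occupancy = L / (L + KD)             # receptor occupancy
    kp_factor = (kp / (kp + koff)) ** N  # proofreading discount
    s = occupancy * kp_factor            # proximal signal
    return s / (1.0 + (s / Ki) ** 2)     # IFF readout: rises, then falls

doses = np.logspace(-3, 3, 13)
for koff in [0.01, 0.1, 1.0]:
    curve = [response(L, koff) for L in doses]
    peak = doses[int(np.argmax(curve))]
    print(f"koff={koff}: peak response at dose ~{peak:.3g}")
```

Because the readout declines once the proximal signal exceeds Ki, high-affinity ligands peak at lower doses, qualitatively matching the "different optimal pMHC affinities at different pMHC doses" observation.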
Higher Harmonic Control for Tiltrotor Vibration Reduction
NASA Technical Reports Server (NTRS)
Nixon, Mark W.; Kvaternik, Raymond G.; Settle, T. Ben
1997-01-01
The results of a joint NASA/Army/Bell Helicopter Textron wind-tunnel test to assess the potential of higher harmonic control (HHC) for reducing vibrations in tiltrotor aircraft operating in the airplane mode of flight, and to evaluate the effectiveness of a Bell-developed HHC algorithm called MAVSS (Multipoint Adaptive Vibration Suppression System) are presented. The test was conducted in the Langley Transonic Dynamics Tunnel using an unpowered 1/5-scale semispan aeroelastic model of the V-22 which was modified to incorporate an HHC system employing both the rotor swashplate and the wing flaperon. The effectiveness of the swashplate and the flaperon acting either singly or in combination in reducing 1P and 3P wing vibrations over a wide range of tunnel airspeeds and rotor rotational speeds was demonstrated. The MAVSS algorithm was found to be robust to variations in tunnel airspeed and rotor speed, requiring only occasional on-line recalculations of the system transfer matrix.
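MAVSS's internals are not given here, but the transfer-matrix formulation it mentions is the classical quasi-static HHC scheme: identify a linear map T from control harmonics u to vibration harmonics z, then iterate u ← u − T⁺z. The sketch below uses a made-up 4×3 linear plant to show the identify-then-suppress loop; it is not the MAVSS code.

```python
# Classical transfer-matrix HHC loop on a hypothetical linear plant (sketch).
import numpy as np

rng = np.random.default_rng(2)
T_true = rng.standard_normal((4, 3))   # plant: vibration harmonics = T u + z0
z0 = rng.standard_normal(4)            # baseline 1P/3P vibration harmonics

def measure(u):
    return T_true @ u + z0

# Identify the transfer matrix by perturbing each control channel once,
# then iterate the standard HHC update u <- u - pinv(T) z.
u = np.zeros(3)
base = measure(u)
T_est = np.column_stack([(measure(np.eye(3)[i] * 0.1) - base) / 0.1
                         for i in range(3)])
for step in range(5):
    z = measure(u)
    u = u - np.linalg.pinv(T_est) @ z
    print(step, np.linalg.norm(measure(u)))   # residual vibration norm
```

With more vibration channels than controls, the loop converges to the least-squares residual; the abstract's "occasional on-line recalculations" correspond to re-estimating T_est when flight conditions drift.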
Experimental data filtration algorithm
NASA Astrophysics Data System (ADS)
Oanta, E.; Tamas, R.; Danisor, A.
2017-08-01
Experimental data reduction is an important topic because the resulting information is used to calibrate theoretical models and to verify the accuracy of their results. The paper presents some ideas used to extract a subset of points from the initial set of points that defines an experimentally acquired curve. The objective is to obtain a subset with significantly fewer points than the initial data set that still accurately defines a smooth curve preserving the shape of the initial curve. Because this is a general study, we used only filtering criteria based on geometric features, which at a later stage may be related to higher-level conditions specific to the phenomenon under investigation. Five algorithms were conceived and implemented in original software of more than 1,800 lines of code, with a flexible structure that makes it easy to add new algorithms. The software instrument was used to process the data of several case studies. Conclusions are drawn regarding the values of the parameters used in the algorithms to decide whether a series of points may be considered either noise or a relevant part of the curve. Being a general analysis, the result is a computer-based trial-and-error method that efficiently solves this kind of problem.
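The paper's five algorithms are not specified in the abstract; a standard geometric criterion for exactly this task is Ramer-Douglas-Peucker, sketched below as a point-of-comparison rather than as the authors' method. The tolerance and test curve are arbitrary.

```python
# Ramer-Douglas-Peucker curve simplification (sketch).
import numpy as np

def rdp(points, tol):
    """Keep the fewest points whose polyline stays within tol of the curve."""
    points = np.asarray(points, float)
    start, end = points[0], points[-1]
    chord = end - start
    length = np.linalg.norm(chord)
    d = points - start
    if length == 0:
        dists = np.linalg.norm(d, axis=1)
    else:
        # perpendicular distance of each point to the start-end chord
        dists = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / length
    idx = int(np.argmax(dists))
    if dists[idx] > tol:                       # farthest point is relevant:
        left = rdp(points[: idx + 1], tol)     # keep it and recurse
        right = rdp(points[idx:], tol)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])             # whole span is within tolerance

x = np.linspace(0, 10, 500)
curve = np.c_[x, np.sin(x) + 0.01 * np.random.randn(x.size)]
print(len(rdp(curve, tol=0.05)), "points kept of", len(curve))
```

The tolerance plays the same role as the noise-versus-relevance parameters the abstract discusses: larger tolerances treat more of the wiggle as noise.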
IQ: Easy to Bash, Hard to Replace.
ERIC Educational Resources Information Center
Pyryt, Michael C.
1996-01-01
This article examines psychometric analysis regarding the viability and limits of IQ testing in the context of "The Bell Curve." It discusses eyeball analysis versus item analysis, mean differences, validity coefficients, general intelligence, and IQ and gifted education, and urges a search for intrapersonal and environmental catalysts…
Racial Research and Final Solutions.
ERIC Educational Resources Information Center
Rushton, J. Philippe
1997-01-01
Presents descriptions and critiques of seven books that cover racism, primarily from the hermeneutical viewpoint. Suggests that all seven were written in response to "The Bell Curve" (Herrnstein and Murray, 1994) and that they collectively argue that any new evidence of genetic determinism is inadmissible on the grounds that empirical work…
The Mounting Toll: Environment and the Loss of Young Talent.
ERIC Educational Resources Information Center
Johnson, Sylvia T.
1995-01-01
Argues that genetics, as popularized in "The Bell Curve" (Herrnstein and Murray, 1994), does not affect educational attainment and personal development, but environmental upheavals do. The environmental changes that affect educational and personal development are highlighted. It cautions that works involving pseudoscience, like "The…
Of Bell Curves, Gout, and Genius.
ERIC Educational Resources Information Center
Klotz, Irving M.
1995-01-01
A chemistry professor emeritus explains the misguided association between gout and genius. Gout, a genetic disease arising from overproduction of uric acid, was prevalent in many historical, upper-class male figures. Gout is equally prevalent in poor rural blacks. Since both populations probably suffered from ingesting lead-poisoned alcoholic…
Schooling Makes You Smarter: What Teachers Need to Know about IQ
ERIC Educational Resources Information Center
Nisbett, Richard E.
2013-01-01
In 1994, America took a giant step backward in understanding intelligence and how it can be cultivated. Richard Herrnstein, a psychology professor at Harvard University, and Charles Murray, a political scientist with the American Enterprise Institute, published "The Bell Curve," a best-selling book that was controversial among…
The African American Critique of White Supremacist Science.
ERIC Educational Resources Information Center
Jorgensen, Carl
1995-01-01
Excerpts writings of past African American intellectuals on the issue of presumptions of innate black mental inferiority, and applies their analyses to the scientific racism found in "The Bell Curve" (Herrnstein and Murray, 1994). Ideas for incorporating this critical tradition into current efforts, to prevent the resurgence of white…
"The Bell Curve" and Its Critical Progeny: A Review.
ERIC Educational Resources Information Center
Davis, Alan
1997-01-01
Discusses R. Herrnstein's and C. Murray's attempt to persuade an educated white readership that they, the readers, are genetically, socially, and intellectually superior. The most effective criticisms are those that rely on scientific evidence about the manipulation of data and flawed analyses rather than the display of moral outrage. (SLD)
Why Johnny Can Be Average Today.
ERIC Educational Resources Information Center
Sturrock, Alan
1997-01-01
During a (hypothetical) phone interview with a university researcher, an elementary principal reminisced about a lifetime of reading groups with unmemorable names, medium-paced math problems, patchworked social studies/science lessons, and totally "average" IQ and batting scores. The researcher hung up at the mention of bell-curved assembly lines…
ERIC Educational Resources Information Center
Siegel, Linda S.
1995-01-01
Responds to "The Bell Curve" by arguing that IQ is merely a statistical fiction, an artificial construct not corresponding to any real entity. Discusses the "seductive statistical trap of factor analysis" as it relates to IQ tests, multiple intelligences, content and bias of IQ tests, lack of validity of IQ tests for individual…
NASA Astrophysics Data System (ADS)
Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun
2017-11-01
In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.
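The paper's modifications to harmony search are not detailed in the abstract; the basic harmony search loop it builds on is sketched below on a stand-in continuous objective (a sphere function in place of the path-length/completion-time model). All parameter values are the textbook defaults, chosen for illustration.

```python
# Basic harmony search (sketch) on a toy continuous objective.
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Improvise candidates from a memory of good solutions, with random
    consideration (prob 1-hmcr) and pitch adjustment (prob par)."""
    rng = np.random.default_rng(3)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    hm = lo + (hi - lo) * rng.random((hms, dim))    # harmony memory
    cost = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:
                new[j] = hm[rng.integers(hms), j]   # draw from memory
                if rng.random() < par:              # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:
                new[j] = rng.uniform(lo[j], hi[j])  # random consideration
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = int(np.argmax(cost))
        if c < cost[worst]:                         # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = int(np.argmin(cost))
    return hm[best], cost[best]

sol, val = harmony_search(lambda v: np.sum(v ** 2),
                          bounds=np.array([[-5.0, 5.0]] * 4))
print(sol.round(3), val)
```

In the paper's setting the decision variables would parameterize the PH curves, and the two objectives would be handled stage by stage.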
Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.
Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu
2018-08-01
To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because the serial images have low signal-to-noise ratios. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
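The conventional NLS baseline that PCANR improves on is a per-pixel monoexponential fit S(TE) = S0·exp(−R2*·TE). A minimal sketch of that baseline (not PCANR itself, which adds the neighborhood terms) follows; echo times, true values, and noise level are hypothetical.

```python
# Conventional pixel-wise monoexponential R2* fit (the NLS baseline).
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, r2s):
    return s0 * np.exp(-r2s * te)

te = np.array([1.0, 2.2, 3.4, 4.6, 5.8, 7.0]) / 1000   # echo times (s)
rng = np.random.default_rng(4)

def fit_r2star(signal):
    """Per-pixel NLS fit; PCANR would add neighborhood regularization here."""
    popt, _ = curve_fit(mono_exp, te, signal, p0=[signal[0], 100.0],
                        bounds=([0, 0], [np.inf, 2000]))
    return popt[1]                                     # R2* in 1/s

true_r2s = 400.0
signal = mono_exp(te, 100.0, true_r2s) + rng.normal(0, 2.0, te.size)
print("fitted R2* =", round(fit_r2star(signal), 1), "s^-1")
```

At high R2* the late echoes decay into the noise floor, which is precisely the regime where the abstract reports the unregularized fit losing accuracy and precision.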
Creative Tiling: A Story of 1000-and-1 Curves
ERIC Educational Resources Information Center
Al-Darwish, Nasir
2012-01-01
We describe a procedure that utilizes symmetric curves for building artistic tiles. One particular curve was found to mesh nicely with hundreds of other curves, resulting in eye-catching tiling designs. The results of our work serve as a good example of using ideas from 2-D graphics and algorithms in a practical web-based application.
Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem.
Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing
2015-01-01
Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial, feasible solution very quickly, and the GA is used to improve that initial solution. Thereafter, experimental software was developed and a large number of experimental computations on Solomon's benchmark were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA.
Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem
Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing
2015-01-01
Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial, feasible solution very quickly, and the GA is used to improve that initial solution. Thereafter, experimental software was developed and a large number of experimental computations on Solomon's benchmark were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA. PMID:26167171
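One common way a space-filling curve seeds VRP routes: map each customer to its position along a Hilbert curve, visit customers in that order, and cut the sequence into routes whenever vehicle capacity would be exceeded. The Hilbert-index routine below is a Python port of the classic bit-twiddling algorithm; the customer sites, demands, and capacity are hypothetical, and the paper's own seeding may differ in detail.

```python
# SFC seeding of VRP routes via Hilbert-curve ordering (sketch).
import numpy as np

def xy2d(n, x, y):
    """Distance of grid cell (x, y) along a Hilbert curve over an n x n grid
    (n a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

rng = np.random.default_rng(5)
customers = rng.random((30, 2)) * 100    # hypothetical site coordinates
demand = rng.integers(1, 10, 30)
capacity = 40

n = 256
grid = np.clip((customers / 100 * (n - 1)).astype(int), 0, n - 1)
order = sorted(range(30), key=lambda i: xy2d(n, grid[i, 0], grid[i, 1]))

routes, route, load = [], [], 0
for i in order:                          # greedy capacity split along the SFC
    if load + demand[i] > capacity:
        routes.append(route); route, load = [], 0
    route.append(i); load += demand[i]
routes.append(route)
print(len(routes), "routes:", routes)
```

Because nearby points on the plane tend to be nearby on the Hilbert curve, this ordering yields short, feasible starting routes that a GA can then refine.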
Bell Inequalities and Group Symmetry
NASA Astrophysics Data System (ADS)
Bolonek-Lasoń, Katarzyna
2017-12-01
Recently a method based on irreducible representations of finite groups has been proposed as a tool for investigating more sophisticated versions of Bell inequalities (V. Uğur Güney, M. Hillery, Phys. Rev. A 90, 062121 (2014) and Phys. Rev. A 91, 052110 (2015)). In the present paper an example based on the symmetry group S4 is considered. The Bell inequality violation due to the symmetry properties of the regular tetrahedron is described. A nonlocal game based on the inequalities derived is described, and it is shown that the violation of the Bell inequality implies that the quantum strategies outperform their classical counterparts.
Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models
NASA Astrophysics Data System (ADS)
Shen, C.; Xia, J.; Mi, B.
2016-12-01
A successful inversion relies on exact forward-modeling methods. Accurately calculating the multi-mode dispersion curves of a given model is a key step in high-frequency surface-wave (Rayleigh-wave and Love-wave) methods. For normal models (shear (S)-wave velocity increasing with depth), the theoretical dispersion curves completely match the dispersion spectrum generated from the wave equation. For models containing a low-velocity layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g., the Thomson-Haskell algorithm, the Knopoff algorithm, the fast vector-transfer algorithm, and so on) fail to be consistent with the dispersion spectrum at high frequencies. They approach a value close to the surface-wave velocity of the low-velocity layer beneath the surface layer, rather than that of the surface layer, when the corresponding wavelengths are short enough. This phenomenon conflicts with the characteristics of surface waves, and it results in an erroneous inverted model. By comparing theoretical dispersion curves with simulated dispersion energy, we propose a direct and essential solution for accurately computing surface-wave phase velocities for low-velocity-layer models. Based on the proposed forward-modeling technique, we can achieve correct inversion for these types of models. Several synthetic data sets proved the effectiveness of our method.
Li, Kenli; Zou, Shuting; Xv, Jin
2008-01-01
Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations.
Li, Kenli; Zou, Shuting; Xv, Jin
2008-01-01
Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations. PMID:18431451
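The ECDLP itself is easy to state in code. The sketch below uses a textbook toy curve over a small prime field (the paper works over GF(2^n), a different but analogous setting): scalar multiplication by double-and-add is fast, while recovering the scalar k from Q = kG is only feasible here by brute force because the group is tiny.

```python
# Toy elliptic-curve arithmetic over GF(p) and a brute-force ECDLP (sketch).

def ec_add(P, Q, a, p):
    """Affine point addition on y^2 = x^3 + a x + b over GF(p); None is O."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                              # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add scalar multiplication: O(log k) group operations."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

a, b, p = 2, 2, 17               # toy curve y^2 = x^3 + 2x + 2 mod 17
G = (5, 1)                       # generator of a subgroup of order 19
Q = ec_mul(13, G, a, p)
# ECDLP: recover k with Q = kG; only brute force works, even at toy size.
k = next(k for k in range(1, 19) if ec_mul(k, G, a, p) == Q)
print(Q, k)
```

At cryptographic sizes the search space is ~2^256, which is the gap the paper's massively parallel DNA algorithms are proposed to attack.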
SCEW: a Microsoft Excel add-in for easy creation of survival curves.
Khan, Haseeb Ahmad
2006-07-01
Survival curves are frequently used for reporting survival or mortality outcomes of experimental pharmacological/toxicological studies and of clinical trials. Microsoft Excel is a simple and widely used tool for creating numerous types of graphic presentations; however, it is difficult to create step-wise survival curves in Excel. Considering the familiarity of clinicians and biomedical scientists with Excel, an algorithm, survival curves in Excel worksheets (SCEW), has been developed for easy creation of survival curves directly in Excel worksheets. The algorithm has been integrated in the form of an Excel add-in for easy installation and usage. The program is based on modification of frequency data for binary break-up using the spreadsheet formula functions, whereas a macro subroutine automates the creation of survival curves. The advantages of this program are simple data input, minimal procedural steps, and the creation of survival curves in the familiar confines of Excel.
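SCEW is an Excel add-in, but the step-wise survival computation it automates is the standard Kaplan-Meier product-limit estimate, sketched here in Python for reference: at each event time, survival is multiplied by (1 − events/at-risk). The times and censoring flags are made up.

```python
# Kaplan-Meier step-wise survival curve (sketch).
import numpy as np

def kaplan_meier(times, events):
    """Return (time, survival) steps; events: 1 = event, 0 = censored."""
    times, events = np.asarray(times), np.asarray(events)
    steps, surv = [(0.0, 1.0)], 1.0
    for t in np.unique(times[events == 1]):
        d = np.sum((times == t) & (events == 1))   # events at time t
        n = np.sum(times >= t)                     # subjects still at risk
        surv *= 1 - d / n
        steps.append((float(t), surv))
    return steps

t = [5, 8, 8, 12, 15, 17, 21, 21]
e = [1, 1, 0, 1, 0, 1, 1, 0]
for time, s in kaplan_meier(t, e):
    print(f"t={time:>5}: S={s:.3f}")
```

Plotting these pairs with horizontal segments between event times yields exactly the step-wise curve the add-in produces in Excel.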
Convergence properties of simple genetic algorithms
NASA Technical Reports Server (NTRS)
Bethke, A. D.; Zeigler, B. P.; Strauss, D. M.
1974-01-01
The essential parameters determining the behaviour of genetic algorithms were investigated. Computer runs were made while systematically varying the parameter values. Results based on the progress curves obtained from these runs are presented along with results based on the variability of the population as the run progresses.
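The "progress curves" mentioned here are best-fitness-versus-generation traces. A minimal bit-string GA that produces such a curve is sketched below on the one-max problem; population size, mutation rate, and generation count are the kind of parameters the study varied.

```python
# Minimal bit-string genetic algorithm with a best-so-far progress curve.
import numpy as np

rng = np.random.default_rng(6)

def simple_ga(fitness, n_bits=20, pop_size=30, p_mut=0.01, gens=60):
    """Roulette selection, one-point crossover, bit-flip mutation."""
    pop = rng.integers(0, 2, (pop_size, n_bits))
    progress = []
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        progress.append(fit.max())                 # the progress curve
        probs = fit / fit.sum()
        parents = pop[rng.choice(pop_size, 2 * pop_size, p=probs)]
        children = []
        for i in range(0, 2 * pop_size, 2):
            cut = rng.integers(1, n_bits)          # one-point crossover
            child = np.r_[parents[i][:cut], parents[i + 1][cut:]]
            flip = rng.random(n_bits) < p_mut      # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)
    return progress

# One-max: fitness = number of ones; the curve should climb toward 20.
print(simple_ga(lambda ind: ind.sum() + 1e-9)[::10])
```

Sweeping p_mut or pop_size and comparing the resulting progress curves reproduces, in miniature, the parameter study the abstract describes.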
Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin
2014-01-01
Nonlinear dependence is general in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measurement called the distance correlation (DC) has been shown to be powerful and computationally effective for nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated data sets: benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRNs inference.
Inferring Nonlinear Gene Regulatory Networks from Gene Expression Data Based on Distance Correlation
Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin
2014-01-01
Nonlinear dependence is general in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measurement called the distance correlation (DC) has been shown to be powerful and computationally effective for nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated data sets: benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRNs inference. PMID:24551058
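The distance correlation itself has a short closed-form sample estimate (Székely et al.): double-center the pairwise distance matrices of the two samples and combine their elementwise means. A sketch follows; the test data are synthetic and illustrate that dCor catches a purely nonlinear (quadratic) dependence that ordinary correlation misses.

```python
# Sample distance correlation (sketch, after Szekely et al.).
import numpy as np

def distance_correlation(x, y):
    """dCor of two 1-D samples; ~0 only when x and y are independent."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])            # pairwise distances
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(7)
x = rng.standard_normal(500)
print(distance_correlation(x, x ** 2))                      # high: nonlinear link
print(distance_correlation(x, rng.standard_normal(500)))    # near zero
```

In the CLR-DC/MRNET-DC/REL-DC pipelines, this score replaces mutual information as the pairwise gene-gene dependence measure fed into the network-pruning step.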
ERIC Educational Resources Information Center
Gartrell, John; Marquez, Stephanie Amadeo
1995-01-01
Criticizes data analysis and interpretation in "The Bell Curve": Herrnstein and Murray do not actually study the "cognitive elite"; do not control for education when examining effects of cognitive ability on occupational outcomes; ignore cultural diversity within broad ethnic groups (Asian Americans, Latinos); and ignore gender…
IQ and Crime: Dull Behavior and/or Misspecified Theory?
ERIC Educational Resources Information Center
Van Brunschot, Erin Gibbs; Brannigan, Augustine
1995-01-01
In response to "The Bell Curve," notes that the effects of IQ on crime and delinquency are mediated by gender and age in a fashion that is not readily explained by a reduction to genetic differences. Discusses possible interrelationships among IQ, delinquency, and school performance, and suggests that the causal link between IQ and…
Shifting the Bell Curve: The Benefits and Costs of Raising Student Achievement
ERIC Educational Resources Information Center
Yeh, Stuart S.
2009-01-01
Benefit-cost analysis was conducted to estimate the increase in earnings, increased tax revenues, value of less crime, and reductions in welfare costs attributable to nationwide implementation of rapid assessment, a promising intervention for raising student achievement in math and reading. Results suggest that social benefits would exceed total…
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Examined the sampling distributions of equating coefficients produced by the characteristic curve method for tests using graded and nominal response scoring using simulated data. For both models and across all three equating situations, the sampling distributions were generally bell-shaped and peaked, and occasionally had a small degree of…
Thayer, Sarah; Bell, Christopher; McDonald, Craig M
2017-06-01
A Duchenne muscular dystrophy (DMD) cohort was identified using a claims-based algorithm to estimate health care utilization and costs for commercially insured DMD patients in the United States. Previous analyses have used broad diagnosis codes that include a range of muscular dystrophy types as a proxy to estimate the burden of DMD. To estimate DMD-associated resource utilization and costs in a sample of patients identified via a claims-based algorithm using diagnosis codes, pharmacy prescriptions, and procedure codes unique to DMD management based on DMD clinical milestones. DMD patients were selected from a commercially insured claims database (2000-2009). Patients with claims suggestive of a non-DMD diagnosis or who were aged 30 years or older were excluded. Each DMD patient was matched by age, gender, and region to controls without DMD in a 1:10 ratio (DMD patients n = 75; controls n = 750). All-cause health care resource utilization, including emergency department, inpatient, outpatient, and physician office visits, and all-cause health care costs were examined over a minimum 1-year period. Costs were computed as total health-plan and patient-paid amounts of adjudicated medical claims (in annualized U.S. dollars). The average age of the DMD cohort was 13 years. Patients in the DMD cohort had a 10-fold increase in health care costs compared with controls ($23,005 vs. $2,277, P < 0.001). Health care costs were significantly higher for the DMD cohort across age strata and, in particular, for DMD patients aged 14-29 years ($40,132 vs. $2,746, P < 0.001). In the United States, resource use and medical costs of DMD are substantial and increase with age. Funding for this study (GHO-10-4441) was provided by GlaxoSmithKline (GSK). Optum was contracted by GSK to conduct the study. Thayer was an employee of Optum Health Economics and Outcomes Research at the time of this study and was not compensated for her participation as an author of this manuscript. Bell is an employee and shareholder of GSK. McDonald has been a consultant for GSK, Sarepta, PTC Therapeutics, Biomarin, and Catabasis on clinical trials regarding Duchenne muscular dystrophy clinical trial design, endpoint selection, and data analysis; Mitobridge for drug development; and Eli Lilly as part of a steering committee for clinical trials. Study concept and design were contributed primarily by Bell, along with Thayer and McDonald. Thayer collected the data, and data interpretation was performed by Thayer and Bell, along with McDonald. The manuscript was written by Thayer and Bell, along with McDonald, and revised by all the authors.
Practical sliced configuration spaces for curved planar pairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, E.
1999-01-01
In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.
Structure and transport properties of nanostructured materials.
Sonwane, C G; Li, Q
2005-03-31
In the present manuscript, we present the simulation of nanoporous aluminum oxide using a molecular-dynamics approach with a recently developed dynamic charge-transfer potential, using serial/parallel programming techniques (Streitz and Mintmire, Phys. Rev. B 1994, 50, 11996). Structures resembling the recently invented ordered nanoporous crystalline materials MCM-41/SBA-15 (Kresge et al., Nature 1992, 359, 710) and inverted porous solids (hollow nanospheres) with up to 10,000 atoms were fabricated and studied in the present work. These materials have been used for the separation of gases and for catalysis, and in several contexts, including reactor design, knowledge of surface diffusion is necessary. In the present work, a new method for estimating surface transport of gases, based on a hybrid Monte Carlo method with an unbiased random walk of a tracer atom on the pore surface, is introduced. The nonoverlapping packings used in the present work were fabricated using an algorithm of very slowly settling rigid spheres from a dilute suspension into a randomly packed bed. The algorithm was modified to obtain unimodal, homogeneous Gaussian and segregated bimodal porous solids. The porosity of these solids was varied by densification using an arbitrary function or by coarsening from a highly densified pellet. The surface tortuosity for the densified solids showed an inverted bell-shaped curve, consistent with the fact that at very high porosities there is a reduction in connectivity, while at low porosities the pores become inaccessible or dead-end. The first-passage-time distribution approach was found to be more efficient in terms of computation time (fewer tracer atoms needed for the linearity of the Einstein plot). Results from hybrid discrete-continuum simulations were close to the discrete simulations for a boundary-layer thickness of 5λ.
Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.
Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah
2016-01-01
An initial assessment method that can classify as well as categorize the severity of paralysis into one of six levels according to the House-Brackmann (HB) system, based on facial landmark motion using an Optical Flow (OF) algorithm, is proposed. The desired landmarks were obtained from video recordings of 5 normal and 3 Bell's palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on motion analysis using area measurement is proposed. This scoring system uses the individual scores from the facial exercises and grades the paralysis based on the HB system. The proposed method has obtained promising results and may play a pivotal role towards improved rehabilitation programs for patients.
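The KLT tracking loop at the heart of such a method maps directly onto OpenCV primitives. Below is a hedged sketch, not the authors' code: the video filename, feature parameters, and the convex-hull-area motion score are all hypothetical stand-ins for their landmark set and area measurement.

```python
# Minimal KLT landmark-tracking loop with OpenCV (sketch).
import cv2
import numpy as np

cap = cv2.VideoCapture("face_exercise.avi")      # hypothetical input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=60,
                              qualityLevel=0.01, minDistance=7)
track_areas = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = new_pts[status.ravel() == 1]
    # Area of the tracked landmark cloud as a crude per-frame motion score.
    hull = cv2.convexHull(good)
    track_areas.append(cv2.contourArea(hull))
    prev_gray, pts = gray, good.reshape(-1, 1, 2)

print("mean tracked area:", np.mean(track_areas))
```

In the paper's setting, asymmetry between left- and right-side landmark motion during the facial exercises would feed the HB grading, rather than this single aggregate area.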
Bell Correlations in a Many-Body System with Finite Statistics
NASA Astrophysics Data System (ADS)
Wagner, Sebastian; Schmied, Roman; Fadel, Matteo; Treutlein, Philipp; Sangouard, Nicolas; Bancal, Jean-Daniel
2017-10-01
A recent experiment reported the first violation of a Bell correlation witness in a many-body system [Science 352, 441 (2016)]. Following discussions in this Letter, we address here the question of the statistics required to witness Bell correlated states, i.e., states violating a Bell inequality, in such experiments. We start by deriving multipartite Bell inequalities involving an arbitrary number of measurement settings, two outcomes per party and one- and two-body correlators only. Based on these inequalities, we then build up improved witnesses able to detect Bell correlated states in many-body systems using two collective measurements only. These witnesses can potentially detect Bell correlations in states with an arbitrarily low amount of spin squeezing. We then establish an upper bound on the statistics needed to convincingly conclude that a measured state is Bell correlated.
Bell Correlations in a Many-Body System with Finite Statistics.
Wagner, Sebastian; Schmied, Roman; Fadel, Matteo; Treutlein, Philipp; Sangouard, Nicolas; Bancal, Jean-Daniel
2017-10-27
A recent experiment reported the first violation of a Bell correlation witness in a many-body system [Science 352, 441 (2016)]. Following discussions in this Letter, we address here the question of the statistics required to witness Bell correlated states, i.e., states violating a Bell inequality, in such experiments. We start by deriving multipartite Bell inequalities involving an arbitrary number of measurement settings, two outcomes per party and one- and two-body correlators only. Based on these inequalities, we then build up improved witnesses able to detect Bell correlated states in many-body systems using two collective measurements only. These witnesses can potentially detect Bell correlations in states with an arbitrarily low amount of spin squeezing. We then establish an upper bound on the statistics needed to convincingly conclude that a measured state is Bell correlated.
High dynamic range algorithm based on HSI color space
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming
2014-10-01
This paper presents a high-dynamic-range algorithm based on the HSI color space. The first problem is to preserve the hue and saturation of the original image and conform to the human visual response: the input image data are converted to the HSI color space, which includes an intensity dimension. The second problem is to raise the speed of the algorithm: an integral image is used to compute the average intensity of every pixel at a certain scale as the local intensity component of the image, from which the detail intensity component is obtained. The third problem is to adjust the overall image intensity: an S-type curve is derived from the original image information and the local intensity component is adjusted according to it. The fourth problem is to enhance detail: the detail intensity component is adjusted according to a curve designed in advance. The weighted sum of the adjusted local intensity component and the adjusted detail intensity component is the final intensity. Converting the synthesized intensity and the other two dimensions to the output color space yields the final processed image.
NASA Astrophysics Data System (ADS)
Weber, M. E.; Reichelt, L.; Kuhn, G.; Pfeiffer, M.; Korff, B.; Thurow, J.; Ricken, W.
2010-03-01
We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray scale curves from images at pixel resolution. The PEAK tool uses the gray scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. The algorithms are available at doi:10.1594/PANGAEA.729700. We applied the new methods successfully to tree rings, to well-dated and already manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS ¹⁴C dating, we found convincing evidence that laminations in Weddell Sea sites represent varves, deposited continuously over several millennia during the last glacial maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
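The zero-crossing method described above is simple enough to sketch: smooth the gray-scale curve, subtract a wide moving average as the baseline, and count sign changes of the difference. Window lengths and the synthetic brightness record below are illustrative, not the PEAK defaults.

```python
# Zero-crossing lamina counting on a gray-scale curve (sketch).
import numpy as np

def count_laminae(gray, smooth=9, window=201):
    """Count bright/dark half-layer passages of a smoothed gray-scale curve
    through a wide moving average (the zero-crossing idea)."""
    gray = np.convolve(gray, np.ones(smooth) / smooth, mode="same")
    baseline = np.convolve(gray, np.ones(window) / window, mode="same")
    sign = np.sign(gray - baseline)
    sign = sign[sign != 0]                  # ignore exact ties
    return int(np.sum(sign[1:] != sign[:-1]))

# Synthetic brightness record: ~50 annual couplets plus noise and drift.
x = np.linspace(0, 50 * 2 * np.pi, 5000)
gray = np.sin(x) + 0.1 * np.random.randn(x.size) + 0.3 * np.sin(x / 80)
print(count_laminae(gray), "crossings (about two per varve year)")
```

Halving the crossing count gives the varve-year estimate; recording the index of each crossing additionally yields the per-season thicknesses the PEAK tool reports.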
Quantum Bell inequalities from macroscopic locality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tzyh Haur; Sheridan, Lana; Navascues, Miguel
2011-02-15
We propose a method to generate analytical quantum Bell inequalities based on the principle of macroscopic locality. By imposing locality over binary processings of virtual macroscopic intensities, we establish a correspondence between Bell inequalities and quantum Bell inequalities in bipartite scenarios with dichotomic observables. We discuss how to improve the latter approximation and how to extend our ideas to scenarios with more than two outcomes per setting.
Spectral unmixing of agents on surfaces for the Joint Contaminated Surface Detector (JCSD)
NASA Astrophysics Data System (ADS)
Slamani, Mohamed-Adel; Chyba, Thomas H.; LaValley, Howard; Emge, Darren
2007-09-01
ITT Corporation, Advanced Engineering and Sciences Division, is currently developing the Joint Contaminated Surface Detector (JCSD) technology under an Advanced Concept Technology Demonstration (ACTD) managed jointly by the U.S. Army Research, Development, and Engineering Command (RDECOM) and the Joint Project Manager for Nuclear, Biological, and Chemical Contamination Avoidance for incorporation on the Army's future reconnaissance vehicles. This paper describes the design of the chemical agent identification (ID) algorithm associated with JCSD. The algorithm detects target chemicals mixed with surface and interferent signatures. Simulated data sets were generated from real instrument measurements to support a matrix of parameters based on a Design Of Experiments approach (DOE). Decisions based on receiver operating characteristics (ROC) curves and area-under-the-curve (AUC) measures were used to down-select between several ID algorithms. Results from top performing algorithms were then combined via a fusion approach to converge towards optimum rates of detections and false alarms. This paper describes the process associated with the algorithm design and provides an illustrating example.
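The core operation in such an identification algorithm, unmixing a measured spectrum into contributions from agent, surface, and interferent signatures, is commonly posed as nonnegative least squares. The sketch below uses a random synthetic library and abundances; the real JCSD signatures, band count, and fusion logic are not public here and the example is purely illustrative.

```python
# Nonnegative least-squares spectral unmixing (sketch).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
n_bands = 80
# Hypothetical library: agent, surface, and interferent reference spectra.
library = np.abs(rng.standard_normal((n_bands, 3)))
abund_true = np.array([0.2, 0.7, 0.1])
measured = library @ abund_true + 0.01 * rng.standard_normal(n_bands)

# NNLS returns the nonnegative mixing coefficients; a detector could then
# threshold the agent coefficient (column 0) to declare a hit.
abund, resid = nnls(library, measured)
print("estimated abundances:", abund.round(3), "residual:", round(resid, 4))
```

Sweeping the detection threshold on the agent coefficient over many simulated mixtures is what generates the ROC curves and AUC measures used in the paper's algorithm down-selection.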
Unsupervised classification of variable stars
NASA Astrophysics Data System (ADS)
Valenzuela, Lucas; Pichara, Karim
2018-03-01
During the past 10 years, a considerable amount of effort has been made to develop algorithms for automatic classification of variable stars. That has been primarily achieved by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a great deal of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating training sets that are insufficient compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an untraditional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific for light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.
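The query-by-similarity idea can be sketched generically (this nearest-neighbour ranking is illustrative only; the paper's novel data structure and similarity function are not reproduced, and the common resampling grid is an assumption):

```python
import numpy as np

def to_feature(time, mag, grid=np.linspace(0.0, 1.0, 64)):
    """Resample a light curve onto a common grid and standardize it so
    curves of different lengths and amplitudes become comparable."""
    t = (time - time.min()) / (time.max() - time.min())
    m = np.interp(grid, t, mag)
    return (m - m.mean()) / (m.std() + 1e-9)

def query(target, catalogue, k=5):
    """Rank unlabelled light curves by Euclidean distance to the target
    and return the indices of the k most similar ones."""
    tf = to_feature(*target)
    feats = np.stack([to_feature(tm, mg) for tm, mg in catalogue])
    return np.argsort(np.linalg.norm(feats - tf, axis=1))[:k]
```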
Can Khan Move the Bell Curve to the Right?
ERIC Educational Resources Information Center
Kronholz, June
2012-01-01
This article features Khan Academy, which offers an online math program and short video lectures embedded in each "module," or math concept, to fit students' goals. By now, more than 1 million people have watched the online video in which Salman Khan--a charming MIT math whiz, Harvard Business School graduate, and former Boston hedge-fund…
Assessment and the Learning Brain: What the Research Tells Us
ERIC Educational Resources Information Center
Hardiman, Mariale; Whitman, Glenn
2014-01-01
If you really want to see how innovative a school is, inquire about its thinking and practices regarding assessment. For the students, does the mere thought of assessment trigger stress? Do the teachers rely heavily on high-stakes, multiple-choice, Bell Curve-generating tests? Or do the students seem relaxed and engaged as teachers experiment with…
49. INTERIOR VIEW OF HARDENER AREA SHOWING GAUGE THAT MEASURES ...
49. INTERIOR VIEW OF HARDENER AREA SHOWING GAUGE THAT MEASURES HARDNESS, THE NAIL MUST BREAK IN THE CENTER RANGE OF THE CURVED BAR TO HAVE THE CORRECT HARDNESS (THE NAIL WILL BREAK TOO EASILY IF TOO HARD AND WILL BEND TOO MUCH IF TOO SOFT) - LaBelle Iron Works, Thirtieth & Wood Streets, Wheeling, Ohio County, WV
Learning Alternatives and Strategies for Students Who Are Struggling
ERIC Educational Resources Information Center
Johnston, Don C.
2008-01-01
Much of what happens in the learning process focuses on teaching to the average student, but the bell curve has deflated with fewer students in the middle, making the educational dilemma more about "how to connect" with students regardless of diverse abilities and various backgrounds. According to the "Twenty-Fourth Annual Report to Congress" by…
For Whom Does "The Bell Curve" Toll? It Tolls for You.
ERIC Educational Resources Information Center
Sternberg, Robert J.
Although British psychologist Francis Galton lost the battle for the definition of intelligence in his own time, his views live on in the work of Richard Herrnstein and Charles Murray. They argue that the Intelligence Quotient (IQ) is an adequate measure of intelligence, and that IQ is highly heritable. They contend that there are racial and…
A PILOT STUDY OF DIAGNOSTIC NEUROMUSCULAR ULTRASOUND IN BELL'S PALSY
TAWFIK, EMAN A.; WALKER, FRANCIS O.; CARTWRIGHT, MICHAEL S.
2015-01-01
Background and purpose Neuromuscular ultrasound of the cranial nerves is an emerging field which may help in the assessment of cranial neuropathies. The aim of this study was to evaluate the role of neuromuscular ultrasound in Bell's palsy. A second objective was to assess the possibility of any associated vagus nerve abnormality. Methods Twenty healthy controls and 12 Bell's palsy patients were recruited. The bilateral facial nerves, vagus nerves, and frontalis muscles were scanned using an 18 MHz linear array transducer. Facial nerve diameter, vagus nerve cross-sectional area, and frontalis thickness were measured. Results Mean facial nerve diameter was 0.8 ± 0.2 mm in controls and 1.1 ± 0.3 mm in the patient group. The facial nerve diameter was significantly larger in patients than controls (p = 0.006, 95% CI for the difference between groups of 0.12-0.48), with a significant side-to-side difference in patients as well (p = 0.004, 95% CI for side-to-side difference of 0.08-0.52). ROC curve analysis of the absolute facial nerve diameter revealed a sensitivity of 75% and a specificity of 70%. No significant differences in vagus nerve cross-sectional area or frontalis thickness were detected between patients and controls. Conclusions Ultrasound can detect facial nerve enlargement in Bell's palsy and may have a role in assessment, or follow-up, of Bell's palsy and other facial nerve disorders. The low sensitivity of the current technique precludes its routine use for diagnosis; however, this study demonstrates its validity and potential for future research. PMID:26076910
Experimental Investigations of Generalized Predictive Control for Tiltrotor Stability Augmentation
NASA Technical Reports Server (NTRS)
Nixon, Mark W.; Langston, Chester W.; Singleton, Jeffrey D.; Piatak, David J.; Kvaternik, Raymond G.; Bennett, Richard L.; Brown, Ross K.
2001-01-01
A team of researchers from the Army Research Laboratory, NASA Langley Research Center (LaRC), and Bell Helicopter-Textron, Inc. have completed hover-cell and wind-tunnel testing of a 1/5-size aeroelastically-scaled tiltrotor model using a new active control system for stability augmentation. The active system is based on a generalized predictive control (GPC) algorithm originally developed at NASA LaRC in 1997 for unknown disturbance rejection. Results of these investigations show that GPC combined with an active swashplate can significantly augment the damping and stability of tiltrotors in both hover and high-speed flight.
Du, Jia; Younes, Laurent; Qiu, Anqi
2011-01-01
This paper introduces a novel large deformation diffeomorphic metric mapping algorithm for whole brain registration where sulcal and gyral curves, cortical surfaces, and intensity images are simultaneously carried from one subject to another through a flow of diffeomorphisms. To the best of our knowledge, this is the first time that the diffeomorphic metric from one brain to another is derived in a shape space of intensity images and point sets (such as curves and surfaces) in a unified manner. We describe the Euler–Lagrange equation associated with this algorithm with respect to momentum, a linear transformation of the velocity vector field of the diffeomorphic flow. The numerical implementation for solving this variational problem, which involves large-scale kernel convolution in an irregular grid, is made feasible by introducing a class of computationally friendly kernels. We apply this algorithm to align magnetic resonance brain data. Our whole brain mapping results show that our algorithm outperforms the image-based LDDMM algorithm in terms of the mapping accuracy of gyral/sulcal curves, sulcal regions, and cortical and subcortical segmentation. Moreover, our algorithm provides better whole brain alignment than combined volumetric and surface registration (Postelnicu et al., 2009) and hierarchical attribute matching mechanism for elastic registration (HAMMER) (Shen and Davatzikos, 2002) in terms of cortical and subcortical volume segmentation. PMID:21281722
Developing Performance Measures for Army Aviation Collective Training
2011-05-01
simulation-based training, such as ATX, is determined by performance improvement of participants within the virtual-training environment (Bell & Waag...) ...of the collective behavior (Bell & Waag, 1998). In ATX, system-based (i.e., simulator) data can be used to extract measures such as timing of events...to CABs. References: Bell, H. H., & Waag, W. L. (1998). Evaluating the effectiveness of flight simulators for training combat...
Recent developments in software for the Belle II aerogel RICH
NASA Astrophysics Data System (ADS)
Šantelj, L.; Adachi, I.; Dolenec, R.; Hataya, K.; Iori, S.; Iwata, S.; Kakuno, H.; Kataura, R.; Kawai, H.; Kindo, H.; Kobayashi, T.; Korpar, S.; Križan, P.; Kumita, T.; Mrvar, M.; Nishida, S.; Ogawa, K.; Ogawa, S.; Pestotnik, R.; Sumiyoshi, T.; Tabata, M.; Yonenaga, M.; Yusa, Y.
2017-12-01
For the Belle II spectrometer a proximity focusing RICH counter with an aerogel radiator (ARICH) will be employed as a PID system in the forward end-cap region of the spectrometer. The detector will provide about 4σ separation of pions and kaons up to momenta of 3.5 GeV/c, at the kinematic limits of the experiment. We present the up-to-date status of the ARICH simulation and reconstruction software, focusing on the recent improvements of the reconstruction algorithms and detector description in the Geant4 simulation. In addition, as a demonstration of detector readout software functionality we show the first cosmic ray Cherenkov rings observed in the ARICH.
Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.
Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen
2017-11-01
A new method was developed and implemented into an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
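A simplified Python rendering of the translation step (not the MRCTools VBA code; it assumes each segment starts at its vertex and that the preceding segment covers the vertex level, with linear interpolation between its measurement points standing in for the connection line):

```python
import numpy as np

def master_recession_curve(segments):
    """Overlap recession segments (time, level) by translating each one
    horizontally so its vertex lands on the line interpolated through
    the preceding segment's measurement points."""
    out_t = [np.asarray(segments[0][0], float)]
    out_h = [np.asarray(segments[0][1], float)]
    for t, h in segments[1:]:
        prev_t, prev_h = out_t[-1], out_h[-1]
        vertex = h[0]                     # highest value of the segment
        # Recession levels decrease with time, so reverse for np.interp.
        t_target = np.interp(vertex, prev_h[::-1], prev_t[::-1])
        shift = t_target - t[0]
        out_t.append(np.asarray(t, float) + shift)
        out_h.append(np.asarray(h, float))
    return np.concatenate(out_t), np.concatenate(out_h)
```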
NASA Technical Reports Server (NTRS)
Vo, San C.; Biegel, Bryan (Technical Monitor)
2001-01-01
Scalar multiplication is an essential operation in elliptic curve cryptosystems because its implementation determines the speed and the memory storage requirements. This paper discusses some improvements on two popular signed window algorithms for implementing scalar multiplications of an elliptic curve point: the Morain-Olivos algorithm and the Koyama-Tsuruoka algorithm.
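The flavour of signed-digit methods can be seen in the compact non-window variant below (generic textbook material, not the Morain-Olivos or Koyama-Tsuruoka algorithm itself; `add`, `neg`, `double`, and `identity` are assumed group operations on curve points):

```python
def naf(k):
    """Non-adjacent form: signed binary digits in {-1, 0, 1} with no two
    adjacent nonzero digits, reducing the number of point additions."""
    digits = []
    while k > 0:
        if k % 2:
            d = 2 - (k % 4)        # +1 or -1
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits[::-1]            # most significant digit first

def scalar_mult(k, P, add, neg, double, identity):
    """Compute k*P with one doubling per digit and an add or subtract
    only on nonzero digits; negating a point is cheap on elliptic curves."""
    Q = identity
    for d in naf(k):
        Q = double(Q)
        if d == 1:
            Q = add(Q, P)
        elif d == -1:
            Q = add(Q, neg(P))
    return Q
```

Signed window algorithms generalize this idea by scanning several digits at a time against a small table of precomputed odd multiples, trading memory for fewer additions.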
Stone, S R; Morrison, J F
1983-06-29
Binding theory has been developed for the reaction of an ionizing enzyme with an ionizing ligand. Consideration has been given to the most general scheme in which all possible reactions and interconversions occur as well as to schemes in which certain interactions do not take place. Equations have been derived in terms of the variation of the apparent dissociation constant (Kiapp) as a function of pH. These equations indicate that plots of pKiapp against pH can be wave-, half-bell- or bell-shaped according to the reactions involved. A wave is obtained whenever there is formation of the enzyme-ligand complexes, ionized enzyme·ionized ligand and protonated enzyme·protonated ligand. The additional formation of singly protonated enzyme-ligand complexes does not affect the wave form of the plot, but can influence the shape of the overall curve. The formation of either ionized enzyme·ionized ligand or protonated enzyme·protonated ligand, with or without singly protonated enzyme-ligand species, gives rise to a half-bell-shaped plot. If only singly protonated enzyme-ligand complexes are formed the plots are bell-shaped, but it is not possible to deduce the ionic forms of the reactants that participate in complex formation. Depending on the reaction pathways, true values for the ionization and dissociation constants may or may not be determined.
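As an illustration of how the bell-shaped case arises, a minimal two-ionization model (generic pK symbols, not values from the paper) in which only the singly protonated complex binds gives:

```latex
K_i^{\mathrm{app}}(\mathrm{pH}) =
  K_i\left(1 + 10^{\,\mathrm{p}K_1-\mathrm{pH}} + 10^{\,\mathrm{pH}-\mathrm{p}K_2}\right),
\qquad
\mathrm{p}K_i^{\mathrm{app}} =
  \mathrm{p}K_i - \log_{10}\!\left(1 + 10^{\,\mathrm{p}K_1-\mathrm{pH}}
  + 10^{\,\mathrm{pH}-\mathrm{p}K_2}\right),
```

so pKiapp is maximal and nearly flat between pK1 and pK2 and falls off with unit slope outside that window, producing a bell-shaped plot.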
NASA Astrophysics Data System (ADS)
Xu, Lili; Luo, Shuqian
2010-11-01
Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role for both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on a mathematical morphological black top hat; feature extraction, to characterize these candidates; and classification based on a support vector machine (SVM), to validate MAs. The choice of feature vector and SVM kernel function is critical to the algorithm's performance. We use the receiver operating characteristic (ROC) curve to evaluate the distinguishing performance of different feature vectors and different kernel functions of the SVM. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
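A minimal sketch of this kernel selection step with scikit-learn (synthetic data and candidate kernels stand in for the actual MA features):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

# Synthetic stand-in for the MA candidate feature vectors.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Compare kernels by the area under the ROC curve on held-out data.
for kernel, kwargs in [("linear", {}), ("poly", {"degree": 2}), ("rbf", {})]:
    clf = SVC(kernel=kernel, probability=True, **kwargs).fit(Xtr, ytr)
    score = clf.predict_proba(Xte)[:, 1]
    fpr, tpr, _ = roc_curve(yte, score)
    print(kernel, "AUC = %.3f" % auc(fpr, tpr))
```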
Xu, Lili; Luo, Shuqian
2010-01-01
Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role for both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on a mathematical morphological black top hat; feature extraction, to characterize these candidates; and classification based on a support vector machine (SVM), to validate MAs. The choice of feature vector and SVM kernel function is critical to the algorithm's performance. We use the receiver operating characteristic (ROC) curve to evaluate the distinguishing performance of different feature vectors and different kernel functions of the SVM. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
Reich, Stephen G
2017-04-01
Bell's palsy is a common outpatient problem, and while the diagnosis is usually straightforward, a number of diagnostic pitfalls can occur, and a lengthy differential diagnosis exists. Recognition and management of Bell's palsy relies on knowledge of the anatomy and function of the various motor and nonmotor components of the facial nerve. Avoiding diagnostic pitfalls relies on recognizing red flags or features atypical for Bell's palsy, suggesting an alternative cause of peripheral facial palsy. The first American Academy of Neurology (AAN) evidence-based review on the treatment of Bell's palsy in 2001 concluded that corticosteroids were probably effective and that the antiviral acyclovir was possibly effective in increasing the likelihood of a complete recovery from Bell's palsy. Subsequent studies led to a revision of these recommendations in the 2012 evidence-based review, concluding that corticosteroids, when used shortly after the onset of Bell's palsy, were "highly likely" to increase the probability of recovery of facial weakness and should be offered; the addition of an antiviral to steroids may increase the likelihood of recovery but, if so, only by a very modest effect. Bell's palsy is characterized by the spontaneous acute onset of unilateral peripheral facial paresis or palsy in isolation, meaning that no features from the history, neurologic examination, or head and neck examination suggest a specific or alternative cause. In this setting, no further testing is necessary. Even without treatment, the outcome of Bell's palsy is favorable, but treatment with corticosteroids significantly increases the likelihood of improvement.
NASA Astrophysics Data System (ADS)
Jough, Fooad Karimi Ghaleh; Şensoy, Serhan
2016-12-01
Different performance levels may be obtained for sideway collapse evaluation of steel moment frames depending on the evaluation procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record-to-record (RTR) variations and cognitive uncertainties for moment resisting steel frames of various heights is discussed in detail. RTR uncertainty is evaluated by incremental dynamic analysis (IDA), modelling uncertainties are considered through backbone curves and hysteresis loops of components, and cognitive uncertainty is represented at three levels of material quality. IDA is performed on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection due to its time-saving appeal. Analytical equations of the response surface method are obtained from the IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves with the various sources of uncertainty mentioned are derived through a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on response surface method coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak is to account for cognitive uncertainties in fragility curves and the mean annual frequency.
ERIC Educational Resources Information Center
McKnight, Lucinda
2018-01-01
This article looks to three inspirational Black women, bell hooks, Stacey McBride-Irby and Patricia Williams, in the pursuit of radical curriculum. While today curriculum is critiqued as racialised, gendered, sexualised and classed, the formats of curriculum documents such as text books, units of work and lesson plans have changed little. These…
Can Khan Move the Bell Curve to the Right?
ERIC Educational Resources Information Center
Kronholz, June
2012-01-01
More than 1 million people have watched the online video in which Salman Khan--a charming MIT math whiz, Harvard Business School graduate, and former Boston hedge-fund analyst--explains how he began tutoring his cousins in math by posting short lessons for them on YouTube. Other people began watching the lessons and sending Khan adulatory notes.…
Schools of Quality: An Introduction to Total Quality Management in Education.
ERIC Educational Resources Information Center
Bonstingl, John Jay
This book offers an introduction to the basic ideas of Total Quality Management (TQM) in education. Chapter 1 contrasts the American model of the bell-shaped curve with the Japanese concept of "kaizen," which is personal dedication to mutual improvement and the heart of TQM philosophy. Chapter 2 provides an overview of the history of the TQ…
Improving the growth of CZT crystals for radiation detectors: a modeling perspective
NASA Astrophysics Data System (ADS)
Derby, Jeffrey J.; Zhang, Nan; Yeckel, Andrew
2012-10-01
The availability of large, single crystals of cadmium zinc telluride (CZT) with uniform properties is key to improving the performance of gamma radiation detectors fabricated from them. Towards this goal, we discuss results obtained by computational models that provide a deeper understanding of crystal growth processes and how the growth of CZT can be improved. In particular, we discuss methods that may be implemented to lessen the deleterious interactions between the ampoule wall and the growing crystal via engineering a convex solidification interface. For vertical Bridgman growth, a novel, bell-curve furnace temperature profile is predicted to achieve macroscopically convex solid-liquid interface shapes during melt growth of CZT in a multiple-zone furnace. This approach represents a significant advance over traditional gradient-freeze profiles, which always yield concave interface shapes, and static heat transfer designs, such as pedestal design, that achieve convex interfaces over only a small portion of the growth run. Importantly, this strategy may be applied to any Bridgman configuration that utilizes multiple, controllable heating zones. Realizing a convex solidification interface via this adaptive bell-curve furnace profile is postulated to result in better crystallinity and higher yields than conventional CZT growth techniques.
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer function for the model.
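The linearized least-squares idea underlying such rational fitting can be sketched as follows (this is Levy's classic formulation, a simplified stand-in for VF's pole-relocation iteration; the model orders and test function are arbitrary):

```python
import numpy as np

def levy_fit(s, H, n_num=3, n_den=3):
    """Fit H(s) ~ N(s)/D(s) with D(0) = 1 by solving the linearized
    problem min || N(s_i) - H_i D(s_i) || in the polynomial coefficients."""
    cols = [s**k for k in range(n_num + 1)]              # a_0 .. a_n
    cols += [-H * s**k for k in range(1, n_den + 1)]     # b_1 .. b_m
    A = np.column_stack(cols)
    # Solve as a real least-squares problem over stacked real/imag parts.
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    a, b = x[: n_num + 1], x[n_num + 1 :]
    return a, np.concatenate([[1.0], b])    # numerator, denominator coeffs

# Example: recover H(s) = 1 / (1 + 0.5 s) from frequency samples.
w = np.linspace(0.1, 10.0, 200)
s = 1j * w
H = 1.0 / (1.0 + 0.5 * s)
print(levy_fit(s, H, n_num=0, n_den=1))   # ~([1.0], [1.0, 0.5])
```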
W. Hasan, W. Z.
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer function for the model. PMID:29351554
Evaluation schemes for video and image anomaly detection algorithms
NASA Astrophysics Data System (ADS)
Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael
2016-05-01
Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently based on differences in the domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is very critical, since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these choices by measuring the performance of an existing anomaly detection algorithm.
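A toy illustration of how the evaluation scheme changes the counts that feed any such curve (the IoU-style overlap criterion is one common choice, not one prescribed by the paper):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def count_tp_fp(detections, truths, thr):
    """A detection counts as a true positive only if it overlaps an
    unmatched ground-truth anomaly by at least `thr`."""
    matched, tp = set(), 0
    for d in detections:
        best = max(range(len(truths)),
                   key=lambda j: iou(d, truths[j]), default=None)
        if (best is not None and best not in matched
                and iou(d, truths[best]) >= thr):
            matched.add(best)
            tp += 1
    return tp, len(detections) - tp

dets = [(10, 10, 20, 20), (40, 40, 50, 50)]
gts = [(12, 12, 22, 22)]
for thr in (0.1, 0.5):       # same detections, different verdicts
    print(thr, count_tp_fp(dets, gts, thr))   # (1, 1) then (0, 2)
```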
An experimental comparison of various methods of nearfield acoustic holography
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
2017-05-19
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
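A compact sketch of L-curve parameter selection for Tikhonov regularization (a generic discrete linear problem; picking the corner by maximum curvature in log-log space is one of several conventions):

```python
import numpy as np

def l_curve_lambda(A, b, lambdas):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 for each lam and return
    the lam at the corner (maximum curvature) of the log-log curve of
    solution norm versus residual norm."""
    n = A.shape[1]
    rho, eta = [], []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta = np.array(rho), np.array(eta)
    # Discrete curvature of the parametric curve (rho(lam), eta(lam)).
    dr, de = np.gradient(rho), np.gradient(eta)
    ddr, dde = np.gradient(dr), np.gradient(de)
    kappa = (dr * dde - ddr * de) / (dr**2 + de**2) ** 1.5
    return lambdas[np.argmax(kappa)]
```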
An experimental comparison of various methods of nearfield acoustic holography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
Curve Set Feature-Based Robust and Fast Pose Estimation Algorithm
Hashimoto, Koichi
2017-01-01
Bin picking refers to picking randomly-piled objects from a bin for industrial production purposes, and robotic bin picking is widely used in automated assembly lines. In order to achieve higher productivity, a fast and robust pose estimation algorithm is necessary to recognize and localize the randomly-piled parts. This paper proposes a pose estimation algorithm for bin picking tasks using point cloud data. A novel descriptor, the Curve Set Feature (CSF), is proposed to describe a point by the surface fluctuation around it and is also capable of evaluating poses. The Rotation Match Feature (RMF) is proposed to match CSF efficiently. The matching process combines the 2D-space matching idea of the original Point Pair Feature (PPF) algorithm with nearest neighbor search. A voxel-based pose verification method is introduced to evaluate the poses and proved to be more than 30 times faster than the kd-tree-based verification method. Our algorithm is evaluated against a large number of synthetic and real scenes and proven to be robust to noise, able to detect metal parts, and more accurate and more than 10 times faster than PPF and Oriented, Unique and Repeatable (OUR)-Clustered Viewpoint Feature Histogram (CVFH). PMID:28771216
A general practice approach to Bell's palsy.
Phan, Nga T; Panizza, Benedict; Wallwork, Benjamin
2016-11-01
Bell's palsy is characterised by an acute onset of unilateral, lower motor neuron weakness of the facial nerve in the absence of an identifiable cause. Establishing the correct diagnosis is imperative and choosing the correct treatment options can optimise the likelihood of recovery. This article summarises our understanding of Bell's palsy and the evidence-based management options available for adult patients. The basic assessment should include a thorough history and physical examination as the diagnosis of Bell's palsy is based on exclusion. For confirmed cases of Bell's palsy, corticosteroids are the mainstay of treatment and should be initiated within 72 hours of symptom onset. Antiviral therapy in combination with corticosteroid therapy may confer a small benefit and may be offered on the basis of shared decision making. Currently, no recommendations can be made for acupuncture, physical therapy, electrotherapy or surgical decompression because well-designed studies are lacking and available data are of low quality.
NASA Astrophysics Data System (ADS)
Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas
2017-12-01
We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.
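For concreteness, the covariance entering these inequalities is the standard one; a generic covariance Bell expression then takes the schematic form below (the coefficients c_{xy} are placeholders, not the paper's specific inequality):

```latex
\operatorname{cov}(A_x, B_y)
  = \langle A_x B_y \rangle - \langle A_x \rangle \langle B_y \rangle,
\qquad
\mathcal{B} = \sum_{x,y} c_{xy}\, \operatorname{cov}(A_x, B_y) \le L .
```

The product term makes the expression nonlinear in the underlying probability distribution, which is why the local bound L need not be attained by deterministic local strategies.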
Liou, Li-Syue; Chang, Chih-Ya; Chen, Hsuan-Ju; Tseng, Chun-Hung; Chen, Cheng-Yu; Sung, Fung-Chang
2017-01-01
This population-based cohort study investigated the risk of developing peripheral arterial occlusive disease (PAOD) in patients with Bell's palsy. We used longitudinal claims data of health insurance of Taiwan to identify 5,152 patients with Bell's palsy newly diagnosed in 2000-2010 and a control cohort of 20,608 patients without Bell's palsy matched by propensity score. Incidence and hazard ratio (HR) of PAOD were assessed by the end of 2013. The incidence of PAOD was approximately 1.5 times greater in the Bell's palsy group than in the non-Bell's palsy controls (7.75 vs. 4.99 per 1000 person-years). Cox proportional hazards regression analysis yielded an adjusted HR of 1.54 (95% confidence interval (CI) = 1.35-1.76) for the Bell's palsy group compared to the non-Bell's palsy group, after adjusting for sex, age, occupation, income and comorbidities. Men were at higher risk of PAOD than women in the Bell's palsy group, but not in the controls. The incidence of PAOD increased with age in both groups, but the Bell's palsy group to control group HR of PAOD decreased as age increased. Systemic steroid treatment was associated with a 13% reduction in the PAOD hazard for Bell's palsy patients compared with those without the treatment, although this reduction was not statistically significant. Bell's palsy appears to be associated with an increased risk of developing PAOD. Further pathophysiologic, histopathology and immunologic research is required to explore the underlying biologic mechanism.
Tiltrotor Vibration Reduction Through Higher Harmonic Control
NASA Technical Reports Server (NTRS)
Nixon, Mark W.; Kvaternik, Raymond G.; Settle, T. Ben
1997-01-01
The results of a joint NASA/Army/Bell Helicopter Textron wind-tunnel test to assess the potential of higher harmonic control (HHC) for reducing vibrations in tiltrotor aircraft operating in the airplane mode of flight, and to evaluate the effectiveness of a Bell-developed HHC algorithm called MAVSS (Multipoint Adaptive Vibration Suppression System), are presented. The test was conducted in the Langley Transonic Dynamics Tunnel using an unpowered 1/5-scale semispan aeroelastic model of the V-22 which was modified to incorporate an HHC system employing both the rotor swashplate and the wing flaperon. The effectiveness of the swashplate and the flaperon acting either singly or in combination in reducing 1P and 3P wing vibrations over a wide range of tunnel airspeeds and rotor rotational speeds was demonstrated. The MAVSS algorithm was found to be robust to variations in tunnel airspeed and rotor speed, requiring only occasional on-line recalculations of the system transfer matrix. HHC had only a small (usually beneficial) effect on blade loads but increased pitch link loads by 25%. No degradation in aeroelastic stability was noted for any of the conditions tested.
A possible loophole in the theorem of Bell.
Hess, K; Philipp, W
2001-12-04
The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.
Fault-tolerant quantum blind signature protocols against collective noise
NASA Astrophysics Data System (ADS)
Zhang, Ming-Hui; Li, Hui-Fang
2016-10-01
This work proposes two fault-tolerant quantum blind signature protocols based on the entanglement swapping of logical Bell states, which are robust against two kinds of collective noises: the collective-dephasing noise and the collective-rotation noise, respectively. Both of the quantum blind signature protocols are constructed from four-qubit decoherence-free (DF) states, i.e., logical Bell qubits. The initial message is encoded on the logical Bell qubits with logical unitary operations, which will not destroy the anti-noise trait of the logical Bell qubits. Based on the fundamental property of quantum entanglement swapping, the receiver simply performs two Bell-state measurements (rather than four-qubit joint measurements) on the logical Bell qubits to verify the signature, which makes the protocols more convenient in a practical application. Different from the existing quantum signature protocols, our protocols can offer the high fidelity of quantum communication with the employment of logical qubits. Moreover, we hereinafter prove the security of the protocols against some individual eavesdropping attacks, and we show that our protocols have the characteristics of unforgeability, undeniability and blindness.
Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm
NASA Astrophysics Data System (ADS)
Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun
2017-12-01
At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global search. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.
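A toy rendering of the SFLA loop (illustrative only, not the paper's inversion code; the sphere function stands in for the dispersion-curve misfit, and the leap and reset rules follow the common textbook form):

```python
import numpy as np

def sfla(f, lo, hi, n_frogs=30, n_memeplex=5, iters=50, local_steps=5,
         seed=0):
    """Minimal shuffled frog-leaping: sort frogs by fitness, deal them
    round-robin into memeplexes, improve each memeplex's worst frog by
    leaping toward its best frog (then the global best, then a random
    reset), and reshuffle."""
    rng = np.random.default_rng(seed)
    frogs = rng.uniform(lo, hi, size=(n_frogs, len(lo)))
    for _ in range(iters):
        frogs = frogs[np.argsort([f(x) for x in frogs])]  # best first
        best = frogs[0].copy()
        for m in range(n_memeplex):
            idx = np.arange(m, n_frogs, n_memeplex)       # round-robin deal
            for _ in range(local_steps):
                sub = idx[np.argsort([f(frogs[i]) for i in idx])]
                worst, leader0 = sub[-1], frogs[sub[0]]
                for leader in (leader0, best):            # leap rules
                    cand = frogs[worst] + rng.uniform() * (leader - frogs[worst])
                    cand = np.clip(cand, lo, hi)
                    if f(cand) < f(frogs[worst]):
                        frogs[worst] = cand
                        break
                else:                                     # random reset
                    frogs[worst] = rng.uniform(lo, hi, size=len(lo))
    return min(frogs, key=f)

# Example: minimize the 2-D sphere function on [-5, 5]^2.
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
print(sfla(lambda x: float(x @ x), lo, hi))   # near [0, 0]
```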
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low resolution neuronal EM images contain noise, and generally few features are available for segmentation; therefore, conventional approaches to identifying the neuron structure from EM images are not successful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; then, at each level of the pyramid, we utilize the Laplacian of Gaussian (LoG) function to obtain structure boundaries; finally, we assemble the detected boundaries by using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image where the decisions are supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. The test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements during structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and the recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging features related to brain diseases.
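A minimal sketch of the pyramid-plus-LoG detection stage (simple averaging stands in for the paper's fusion algorithm, and the SARSA(λ) gap-closing step is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, zoom

def multiscale_log_boundaries(img, levels=3, sigma=2.0):
    """Apply a Laplacian of Gaussian at each level of a Gaussian pyramid
    and fuse the responses at full resolution by averaging."""
    fused = np.zeros(img.shape, dtype=float)
    level = img.astype(float)
    for _ in range(levels):
        response = np.abs(gaussian_laplace(level, sigma))
        # Upsample the response back to the original size and accumulate.
        fy = img.shape[0] / response.shape[0]
        fx = img.shape[1] / response.shape[1]
        fused += zoom(response, (fy, fx), order=1)
        level = zoom(level, 0.5, order=1)    # next (coarser) pyramid level
    return fused / levels
```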
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low resolution neuronal EM images contain noise, and generally few features are available for segmentation; therefore, conventional approaches to identifying the neuron structure from EM images are not successful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; then, at each level of the pyramid, we utilize the Laplacian of Gaussian (LoG) function to obtain structure boundaries; finally, we assemble the detected boundaries by using a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image where the decisions are supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. The test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements during structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and the recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging features related to brain diseases. PMID:24625699
An Improved Arbitrated Quantum Scheme with Bell States
NASA Astrophysics Data System (ADS)
Zhang, Yingying; Zeng, Jiwen
2018-04-01
In 2014, Liu et al. (Int. J. Theor. Phys. 53(5), 1569-1579, 2014) proposed an arbitrated quantum signature scheme (Liu'14) with Bell states by utilizing a new quantum one-time pad algorithm. It was claimed that the scheme can resist the receiver's existential forgery attack and that no party has a chance to change the message and its signature without being discovered. Recently, Xu and Zou (Int. J. Theor. Phys. 55, 4142-4156, 2016) analyzed the above scheme and demonstrated that it cannot resist the signer's disavowal and the receiver's existential forgery. However, the authors did not give a method to solve this. In this paper, we give an improved arbitrated quantum signature scheme to close the loopholes in Liu'14.
NASA Astrophysics Data System (ADS)
De Raedt, Hans; Michielsen, Kristel; Hess, Karl
2016-12-01
Using Einstein-Podolsky-Rosen-Bohm experiments as an example, we demonstrate that the combination of a digital computer and algorithms, as a metaphor for a perfect laboratory experiment, provides solutions to problems of the foundations of physics. Employing discrete-event simulation, we present a counterexample to John Bell's remarkable "proof" that any theory of physics, which is both Einstein-local and "realistic" (counterfactually definite), results in a strong upper bound to the correlations that are being measured in Einstein-Podolsky-Rosen-Bohm experiments. Our counterexample, which is free of the so-called detection-, coincidence-, memory-, and contextuality loophole, violates this upper bound and fully agrees with the predictions of quantum theory for Einstein-Podolsky-Rosen-Bohm experiments.
BellHouse - a collaboration in ceramics
NASA Astrophysics Data System (ADS)
Johnstone, Rupert; Liggins, Felicity; Buontempo, Carlo; Honnor, Seth; Spencer-Mills, Jocelyn; Newton, Paula; Williams, Emily
2017-04-01
In the Spring of 2016, the UK-based arts organisation Kaleider and the EU-funded FP7 climate services project EUPORIAS made an International Commission Call inviting artists to submit ideas for playable artworks to be debuted at the EUPORIAS General Assembly at the Met Office in October 2016. We received over 60 applications worldwide and were overwhelmed by the quality of ideas. We commissioned Roop Johnstone from RAMP Ceramics to create his exquisite playable artwork, BellHouse. BellHouse is a playful, interactive sound sculpture that translated the non-verbal communication of the delegates presenting at the EUPORIAS General Assembly into the chimes of 35 bells in an open-sided house. A motion capture system devised by the Met Office Informatics Lab activated striking mechanisms associated with each ceramic bell, generating a continuous chiming whilst each speaker at the 250-delegate conference presented their research. BellHouse also invited Met Office scientists to interact with it through their work. Some of our favourite data translated into sound included Mt. Etna's volcanic plumes, the European drought of 1976, the solar wind, 250 years of English and Welsh temperature and precipitation anomalies and reanalysis data based on citizen science. Here we present an exploration of the why and how of BellHouse, outlining some of our reflections on its effectiveness alongside its legacy.
Experiments with conjugate gradient algorithms for homotopy curve tracking
NASA Technical Reports Server (NTRS)
Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.
1991-01-01
There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
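A generic preconditioned conjugate gradient in Python (the standard method for symmetric positive definite systems, shown to illustrate the linear algebra kernel; it is not HOMPACK's Craig variant, which handles the homotopy Jacobian's kernel computation):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD A; M_inv applies the
    inverse of the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example with a diagonal (Jacobi) preconditioner.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(A, b, lambda r: r / np.diag(A)))   # ~[0.0909, 0.6364]
```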
Certified randomness in quantum physics.
Acín, Antonio; Masanes, Lluis
2016-12-07
The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.
Visual navigation using edge curve matching for pinpoint planetary landing
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei
2018-05-01
Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and utilize poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of the craters tracked in two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, considering the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation with a crater-based absolute navigation method, which identifies craters using a georeferenced database, for continuous estimation of absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance; these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.
ERIC Educational Resources Information Center
MacCabe, James H.
2010-01-01
It has long been claimed that there is a strong association between high intelligence, or exceptional creativity, and mental illness. In this book, James MacCabe investigates this claim, using evidence from Swedish population data. He finds evidence that children who achieve either exceptionally high, or very low grades at school, are at greater…
OPC for curved designs in application to photonics on silicon
NASA Astrophysics Data System (ADS)
Orlando, Bastien; Farys, Vincent; Schneider, Loïc.; Cremer, Sébastien; Postnikov, Sergei V.; Millequant, Matthieu; Dirrenberger, Mathieu; Tiphine, Charles; Bayle, Sébastian; Tranquillin, Céline; Schiavone, Patrick
2016-03-01
Today's designs for photonic devices on silicon rely on non-Manhattan features such as curves and a wide variety of angles, with minimum feature sizes below 100 nm. Industrial manufacturing of such devices requires an optimized process window with 193 nm lithography. Therefore, Resolution Enhancement Techniques (RET) that are commonly used for CMOS manufacturing are required. However, most RET algorithms are based on Manhattan fragmentation (0°, 45° and 90°), which can generate large CD dispersion on masks for photonic designs. Industrial implementation of RET solutions for photonic designs is challenging as most currently available OPC tools are CMOS-oriented. Discrepancies between design and final results induced by RET techniques can lead to lower photonic device performance. We propose a novel sizing algorithm allowing adjustment of design edge fragments while preserving the topology of the original structures. The results of implementing the algorithm in rule-based sizing, SRAF placement and model-based correction are discussed in this paper. Corrections based on this novel algorithm were applied and characterized on real photonic devices. The obtained results demonstrate the validity of the proposed correction method integrated in the Inscale software of Aselta Nanographics.
Measurement Theory Based on the Truth Values Violates Local Realism
NASA Astrophysics Data System (ADS)
Nagata, Koji
2017-02-01
We investigate the violation factor of the Bell-Mermin inequality. Until now, it has been assumed that the measurement results are ±1. In this case, the maximum violation factor is 2^{(n-1)/2}. The quantum predictions of the n-partite Greenberger-Horne-Zeilinger (GHZ) state violate the Bell-Mermin inequality by an amount that grows exponentially with n. Recently, a new measurement theory based on truth values was proposed (Nagata and Nakamura, Int. J. Theor. Phys. 55:3616, 2016), in which the values of measurement outcomes are either +1 or 0. Here we use the new measurement theory. We consider the multipartite GHZ state. It turns out that the Bell-Mermin inequality is violated by the amount of 2^{(n-1)/2}. The measurement theory based on truth values thus provides the maximum violation of the Bell-Mermin inequality.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-03
... crosstube) installed on certain Bell Helicopter Textron, Inc. (Bell) and Agusta S.p.A. (Agusta) model helicopters as an approved Bell part installed during production or based on a Supplemental Type Certificate...-0001. Hand Delivery: Deliver to the ``Mail'' address between 9 a.m. and 5 p.m., Monday through Friday...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lue Xing; Sun Kun; Wang Pan
In the framework of Bell-polynomial manipulations, under investigation hereby are three single-field bilinearizable equations: the (1+1)-dimensional shallow water wave model, Boiti-Leon-Manna-Pempinelli model, and (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to achieve the Baecklund transformations and Lax pairs associated with those three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for those three soliton equations can be, respectively, cast into the bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.
Two-step complete polarization logic Bell-state analysis.
Sheng, Yu-Bo; Zhou, Lan
2015-08-26
The Bell state plays a significant role in the fundamental tests of quantum mechanics, such as the nonlocality of the quantum world. Bell-state analysis is of vital importance in quantum communication. Existing Bell-state analysis protocols usually focus on Bell-state encoding in the physical qubit directly. In this paper, we describe an alternative approach to realize near-complete logic Bell-state analysis for the polarized concatenated Greenberger-Horne-Zeilinger (C-GHZ) state with two logic qubits. We show that the logic Bell state can be distinguished in two steps with the help of the parity-check measurement (PCM) constructed by the cross-Kerr nonlinearity. This approach can also be used to distinguish an arbitrary C-GHZ state with N logic qubits. As both recent theoretical and experimental work has shown that the C-GHZ state is robust in a practical noisy environment, this protocol may be useful in future long-distance quantum communication based on logic-qubit entanglement.
Inverse Diffusion Curves Using Shape Optimization.
Zhao, Shuang; Durand, Fredo; Zheng, Changxi
2018-07-01
The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.
Quantum Private Query Based on Bell State and Single Photons
NASA Astrophysics Data System (ADS)
Gao, Xiang; Chang, Yan; Zhang, Shi-Bin; Yang, Fan; Zhang, Yan
2018-03-01
Quantum private query (QPQ) can protect both the user's and the database holder's privacy. In this paper, we propose a novel quantum private query protocol based on the Bell state and single photons. As far as we know, no QPQ based on the Bell state has been proposed before. By using decoherence-free (DF) states, our protocol can resist collective noise. Besides that, our protocol is a one-way quantum protocol, which can resist the Trojan horse attack and reduce the communication complexity. Our protocol can not only guarantee the participants' privacy but also stand against an external eavesdropper.
An extension of the receiver operating characteristic curve and AUC-optimal classification.
Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto
2012-10-01
While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate to the fixed lower false-positive rate is preferable and thus the partial AUC corresponding to lower false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
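The partial AUC is straightforward to compute empirically. The following numpy sketch traces the ROC curve from scores and labels and integrates it up to a false-positive-rate limit; it is an illustration of the quantity itself, not the authors' boosting method, and the function names are ours.

    import numpy as np

    def roc_points(scores, labels):
        # Sort by descending score and sweep the threshold to trace the ROC.
        order = np.argsort(-np.asarray(scores))
        labels = np.asarray(labels)[order]
        tp = np.cumsum(labels == 1)
        fp = np.cumsum(labels == 0)
        tpr = tp / max(tp[-1], 1)
        fpr = fp / max(fp[-1], 1)
        return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

    def partial_auc(scores, labels, fpr_max=0.1):
        # Area under the ROC curve restricted to FPR <= fpr_max;
        # with fpr_max=1.0 this reduces to the ordinary AUC.
        fpr, tpr = roc_points(scores, labels)
        mask = fpr <= fpr_max
        return np.trapz(tpr[mask], fpr[mask])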
Quantum communication complexity advantage implies violation of a Bell inequality
Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii
2016-01-01
We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600
Facial paralysis caused by malignant skull base neoplasms.
Marzo, Sam J; Leonetti, John P; Petruzzelli, Guy
2002-12-01
Bell palsy remains the most common cause of facial paralysis. Unfortunately, this term is often erroneously applied to all cases of facial paralysis. The authors performed a retrospective review of data obtained in 11 patients who were treated at a university-based referral practice between July 1988 and September 2001 and who presented with acute facial nerve paralysis mimicking Bell palsy. All patients were subsequently found to harbor an occult skull base neoplasm. A delay in diagnosis was demonstrated in all cases. Seven patients died of their disease, and four patients are currently free of disease. Although Bell palsy remains the most common cause of peripheral facial nerve paralysis, patients in whom neoplasms invade the facial nerve may present with acute paralysis mimicking Bell palsy that fails to resolve. Delays in diagnosis and treatment in such cases may result in increased rates of mortality and morbidity.
NASA Astrophysics Data System (ADS)
Liu, Zeyu; Xia, Tiecheng; Wang, Jinbo
2018-03-01
We propose a new fractional two-dimensional triangle function combination discrete chaotic map (2D-TFCDM) based on the discrete fractional difference. The chaotic behaviors of the proposed map are observed, and the bifurcation diagrams, the largest Lyapunov exponent plot, and the phase portraits are derived, respectively. Finally, with secret keys generated by the Menezes-Vanstone elliptic curve cryptosystem, we apply the discrete fractional map to color image encryption. The image encryption algorithm is then analyzed in four aspects; the results indicate that the proposed algorithm is superior to the other algorithms considered. Project supported by the National Natural Science Foundation of China (Grant Nos. 61072147 and 11271008).
Bilevel thresholding of sliced image of sludge floc.
Chu, C P; Lee, D J
2004-02-15
This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. According to an evaluation of luminescence-inverted images and fractal curves (quadric Koch curve and Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and was chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing sludge floc structure.
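Otsu's method, preferred in the comparison above, selects the threshold that maximizes the between-class variance of the gray-level histogram. A minimal Python sketch for an 8-bit image follows; it illustrates the standard method, not the authors' code.

    import numpy as np

    def otsu_threshold(img):
        # img: 2D uint8 array. Maximize between-class variance over thresholds.
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        omega = np.cumsum(p)                    # class-0 probability
        mu = np.cumsum(p * np.arange(256))      # cumulative mean
        mu_t = mu[-1]                           # global mean
        with np.errstate(divide='ignore', invalid='ignore'):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        sigma_b[~np.isfinite(sigma_b)] = 0.0    # ends of the sweep are 0/0
        return int(np.argmax(sigma_b))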
Lee, Jae-Hong; Kim, Do-Hyung; Jeong, Seong-Nyum; Choi, Seong-Ho
2018-04-01
The aim of the current study was to develop a computer-assisted detection system based on a deep convolutional neural network (CNN) algorithm and to evaluate the potential usefulness and accuracy of this system for the diagnosis and prediction of periodontally compromised teeth (PCT). Combining pretrained deep CNN architecture and a self-trained network, periapical radiographic images were used to determine the optimal CNN algorithm and weights. The diagnostic and predictive accuracy, sensitivity, specificity, positive predictive value, negative predictive value, receiver operating characteristic (ROC) curve, area under the ROC curve, confusion matrix, and 95% confidence intervals (CIs) were calculated using our deep CNN algorithm, based on a Keras framework in Python. The periapical radiographic dataset was split into training (n=1,044), validation (n=348), and test (n=348) datasets. With the deep learning algorithm, the diagnostic accuracy for PCT was 81.0% for premolars and 76.7% for molars. Using 64 premolars and 64 molars that were clinically diagnosed as severe PCT, the accuracy of predicting extraction was 82.8% (95% CI, 70.1%-91.2%) for premolars and 73.4% (95% CI, 59.9%-84.0%) for molars. We demonstrated that the deep CNN algorithm was useful for assessing the diagnosis and predictability of PCT. Therefore, with further optimization of the PCT dataset and improvements in the algorithm, a computer-aided detection system can be expected to become an effective and efficient method of diagnosing and predicting PCT.
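The abstract reports a Keras-based deep CNN but does not specify its architecture, so the sketch below is purely hypothetical: a small binary PCT classifier whose input size, layer counts and metrics are our assumptions, indicating only the general shape of such a pipeline.

    import tensorflow as tf

    def build_pct_classifier(input_shape=(224, 224, 1)):
        # Hypothetical small CNN; the published model combines a pretrained
        # architecture with a self-trained network, which is not reproduced here.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation='relu',
                                   input_shape=input_shape),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation='relu'),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation='relu'),
            tf.keras.layers.Dense(1, activation='sigmoid'),  # PCT vs. non-PCT
        ])
        model.compile(optimizer='adam', loss='binary_crossentropy',
                      metrics=['accuracy', tf.keras.metrics.AUC()])
        return model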
The Bell Curve Revisited: Testing Controversial Hypotheses with Molecular Genetic Data
Conley, Dalton; Domingue, Benjamin
2017-01-01
In 1994, the publication of Herrnstein’s and Murray’s The Bell Curve resulted in a social science maelstrom of responses. In the present study, we argue that Herrnstein’s and Murray’s assertions were made prematurely, on their own terms, given the lack of data available to test the role of genotype in the dynamics of achievement and attainment in U.S. society. Today, however, the scientific community has access to at least one dataset that is nationally representative and has genome-wide molecular markers. We deploy those data from the Health and Retirement Study in order to test the core series of propositions offered by Herrnstein and Murray in 1994. First, we ask whether the effect of genotype is increasing in predictive power across birth cohorts in the middle twentieth century. Second, we ask whether assortative mating on relevant genotypes is increasing across the same time period. Finally, we ask whether educational genotypes are increasingly predictive of fertility (number ever born [NEB]) in tandem with the rising (negative) association of educational outcomes and NEB. The answers to these questions are mostly no; while molecular genetic markers can predict educational attainment, we find little evidence for the proposition that we are becoming increasingly genetically stratified. PMID:29130056
Doi, Kent; Hu, Xuzhen; Yuen, Peter S.T.; Leelahavanichkul, Asada; Yasuda, Hideo; Kim, Soo Mi; Schnermann, Jürgen; Jonassen, Thomas E.N.; Frøkiær, Jørgen; Nielsen, Søren; Star, Robert A.
2008-01-01
Sepsis remains a serious problem in critically ill patients, with mortality increasing to over half when there is attendant acute kidney injury. α-Melanocyte-stimulating hormone is a potent anti-inflammatory cytokine that inhibits many forms of inflammation, including that accompanying acute kidney injury. We tested whether a new α-melanocyte-stimulating hormone analogue (AP214), which has increased binding affinity to melanocortin receptors, improves sepsis-induced kidney injury and mortality using a cecal ligation and puncture mouse model. In the lethal cecal ligation-puncture model of sepsis, severe hypotension and bradycardia resulted, and AP214 attenuated acute kidney injury of the lethal model with a bell-shaped dose-response curve. An optimum AP214 dose reduced acute kidney injury even when it was administered 6 hr after surgery, and it significantly improved blood pressure and heart rate. AP214 reduced serum TNF-α and IL-10 levels with a bell-shaped dose-response curve. Additionally, NF-κB activation in the kidney and spleen, and splenocyte apoptosis, were decreased by the treatment. AP214 significantly improved survival in both lethal and sublethal models. We have shown that AP214 improves hemodynamic failure, acute kidney injury, mortality and splenocyte apoptosis by attenuating pro- and anti-inflammatory actions due to sepsis. PMID:18354376
NASA Astrophysics Data System (ADS)
Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.
2009-12-01
We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian-smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the gray-scale curve through a wide moving average, so the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components before positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated, previously hand-counted marine varves from Saanich Inlet before we adapted the tools to rather complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition: a coarser-grained bright layer and a finer-grained dark layer. The new tools offer several advantages over previous ones. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting, so results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or season is also measured, an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
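As an illustration of the zero-crossing method described above, the short Python sketch below counts passages of a gray-scale curve through a wide moving average; it is not the authors' software, and the window length is an assumption.

    import numpy as np

    def count_zero_crossings(gray, window=51):
        # Count sign changes of the gray-scale curve about a wide moving
        # average; each positive/negative passage marks one seasonal layer.
        kernel = np.ones(window) / window
        baseline = np.convolve(gray, kernel, mode='same')
        sign = np.sign(gray - baseline)
        sign = sign[sign != 0]              # drop exact ties on the baseline
        crossings = int(np.sum(sign[1:] != sign[:-1]))
        return crossings                    # two crossings ~ one varve couplet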
Application of the Trend Filtering Algorithm for Photometric Time Series Data
NASA Astrophysics Data System (ADS)
Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.
2016-08-01
Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and selected Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves; however, it may over-filter intrinsic variables and increase "instantaneous" dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for variable-precision photometric surveys and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, to summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and to assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
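At its core, TFA fits each target light curve as a linear combination of template light curves and subtracts the fitted trend. A minimal numpy sketch of that detrending step follows (illustrative only, not the authors' MATLAB implementation, which additionally weights by measurement uncertainties).

    import numpy as np

    def tfa_detrend(target, templates):
        # target: (n,) flux series; templates: (m, n) template light curves.
        # Least-squares fit target ~ A @ c, then remove the fitted trend.
        A = np.asarray(templates).T          # (n, m) design matrix
        c, *_ = np.linalg.lstsq(A, target, rcond=None)
        return target - A @ c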
Extraction of Capillary Non-perfusion from Fundus Fluorescein Angiogram
NASA Astrophysics Data System (ADS)
Sivaswamy, Jayanthi; Agarwal, Amit; Chawla, Mayank; Rani, Alka; Das, Taraprasad
Capillary Non-Perfusion (CNP) is a condition in diabetic retinopathy where blood ceases to flow to certain parts of the retina, potentially leading to blindness. This paper presents a solution for automatically detecting and segmenting CNP regions from fundus fluorescein angiograms (FFAs). CNPs are modelled as valleys, and a novel technique based on an extrema pyramid is presented for trough-based valley detection. The obtained valley points are used to segment the desired CNP regions by employing a variance-based region growing scheme. The proposed algorithm has been tested on 40 images and validated against expert-marked ground truth, and its segmentation performance is compared against two other methods. The performance of the proposed algorithm is presented as a receiver operating characteristic (ROC) curve. The area under this curve is 0.842 and the distance of the ROC from the ideal point (0,1) is 0.31. The proposed method for CNP segmentation was found to outperform the watershed [1] and heat-flow [2] based methods.
Craniofacial Reconstruction Using Rational Cubic Ball Curves
Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan
2015-01-01
This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. The Ball curve is chosen for its computational efficiency compared with the Bezier curve. The main steps are conversion of Digital Imaging and Communications in Medicine (Dicom) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution back to Dicom format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates the applicability of this method. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
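For reference, a rational cubic Ball curve can be evaluated directly from four control points and four weights. The sketch below uses the standard cubic Ball basis; the control points and weights are placeholders, and the genetic-algorithm fitting step is omitted.

    import numpy as np

    def rational_cubic_ball(P, w, t):
        # P: (4, 2) control points; w: (4,) positive weights; t in [0, 1].
        # Cubic Ball basis functions.
        B = np.array([(1 - t) ** 2,
                      2 * t * (1 - t) ** 2,
                      2 * t ** 2 * (1 - t),
                      t ** 2])
        return (w * B) @ P / np.dot(w, B)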
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, since it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation; the latter, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN is set up. Four main indicators, including total gray, gray grade curve, characteristic direction and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
NASA Astrophysics Data System (ADS)
Kota, Sujatha; Padmanabhuni, Venkata Nageswara Rao; Budda, Kishor; K, Sruthi
2018-05-01
Elliptic Curve Cryptography (ECC) is a public-key cryptographic algorithm, using a private key and a public key, that is used for both authentication of a person and confidentiality of data. One of the keys is used in encryption and the other in decryption, depending on the usage: for authentication, the user encrypts with the private key and the public key identifies the user; for confidentiality, the sender encrypts with the private key and the public key is used to decrypt the message. Choosing the private key is always an issue in public-key cryptographic algorithms such as RSA and ECC: if tiny values are chosen at random, the security of the complete algorithm becomes an issue, and since the public key is computed from the private key, keys that are not chosen optimally can generate infinity values. The proposed Modified Elliptic Curve Cryptography offers two options for this selection: the first uses Particle Swarm Optimization and the second uses the Cuckoo Search Algorithm for randomly choosing the values. The proposed algorithms are developed and tested using a sample database, and both are found to be secure and reliable. The test results show that the private key is chosen optimally, being neither repetitive nor tiny, and that the public-key computations do not reach infinity.
Rayleigh wave dispersion curve inversion by using particle swarm optimization and genetic algorithm
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Zor, Ekrem; Karaman, Abdullah
2017-04-01
Inversion of surface wave dispersion curves, with its highly nonlinear nature, presents difficulties for traditional linear inverse methods due to strong dependence on the initial model, the possibility of trapping in local minima, and the evaluation of partial derivatives. Modern global optimization methods such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) can overcome these difficulties in surface wave analysis. GA is based on biological evolution, consisting of reproduction, crossover and mutation operations, while the PSO algorithm, developed after GA, is inspired by the social behaviour of bird flocks or fish schools. The utility of these methods requires a plausible convergence rate, acceptable relative error and optimum computation cost, which are important for modelling studies. Even though the PSO and GA processes appear similar, the crossover operation of GA is not used in PSO, and in GA mutation is a stochastic process that changes genes within chromosomes. Unlike GA, the particles in the PSO algorithm change their positions with velocities updated according to each particle's own experience and the swarm's experience. In this study, we applied the PSO algorithm to estimate S-wave velocities and thicknesses of a layered earth model from the Rayleigh wave dispersion curve, compared these results with GA, and emphasize the advantage of using the PSO algorithm for geophysical modelling studies considering its rapid convergence, low misfit error and low computation cost.
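A generic PSO loop of the kind applied here is compact. The Python sketch below minimizes an arbitrary dispersion-curve misfit function; the misfit itself, swarm size and coefficients are placeholders, not the study's settings.

    import numpy as np

    def pso_minimize(misfit, bounds, n_particles=30, n_iter=200,
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        # misfit(x) -> scalar; bounds: (d, 2) array of [low, high] per parameter
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([misfit(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()          # global best
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([misfit(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()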
Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana Dinora; Baeza-Serrato, Roberto
2018-06-04
In this work, a novel tailored algorithm to enhance the overall sensitivity of gas concentration sensors based on the Direct Absorption Tunable Laser Absorption Spectroscopy (DA-ATLAS) method is presented. By using this algorithm, the sensor sensitivity can be custom-designed to be quasi-constant over a much larger dynamic range compared with that obtained by typical methods based on a single statistics feature of the sensor signal output (peak amplitude, area under the curve, mean or RMS). Additionally, it is shown that with our algorithm an optimal function can be tailored to obtain a quasi-linear relationship between the concentration and some specific statistics features over a wider dynamic range. In order to test the viability of our algorithm, a basic C2H2 sensor based on DA-ATLAS was implemented, and its experimental measurements support the simulated results provided by our algorithm.
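The four single-statistic features the tailored algorithm improves upon are elementary to compute from a sampled signal; a small numpy sketch (illustrative only):

    import numpy as np

    def signal_features(y, x=None):
        # The four single-statistic features contrasted in the abstract.
        x = np.arange(len(y)) if x is None else x
        return {
            'peak': float(np.max(y)),
            'area': float(np.trapz(y, x)),
            'mean': float(np.mean(y)),
            'rms':  float(np.sqrt(np.mean(np.asarray(y) ** 2))),
        }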
Refined genetic algorithm -- Economic dispatch example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheble, G.B.; Brittig, K.
1995-02-01
A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality, so the constraints imposed on unit curves by classical Lagrangian techniques are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
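A bare-bones GA for an ED-style problem, with a power-balance penalty factor and elitism, might look like the Python sketch below; the cost model, operators and parameters are illustrative assumptions, and mutation prediction and interval approximation are omitted.

    import numpy as np

    def ga_dispatch(cost, demand, p_min, p_max, pop=60, gens=300,
                    mut=0.1, penalty=1e3, seed=0):
        # cost(P) -> fuel cost of generation vector P; p_min/p_max: (d,) limits.
        rng = np.random.default_rng(seed)
        d = len(p_min)
        P = rng.uniform(p_min, p_max, (pop, d))
        def fitness(P):
            # penalty factor enforces the power-balance constraint
            return (np.array([cost(p) for p in P])
                    + penalty * np.abs(P.sum(axis=1) - demand))
        for _ in range(gens):
            f = fitness(P)
            elite = P[np.argmin(f)].copy()                # elitism
            i, j = rng.integers(0, pop, (2, pop))          # tournament selection
            parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
            alpha = rng.random((pop, d))                   # arithmetic crossover
            children = alpha * parents + (1 - alpha) * parents[::-1]
            m = rng.random((pop, d)) < mut                 # uniform mutation
            children[m] = rng.uniform(p_min, p_max, (pop, d))[m]
            children[0] = elite
            P = np.clip(children, p_min, p_max)
        f = fitness(P)
        return P[np.argmin(f)], f.min()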
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-21
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal intervention. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose, with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum iteration count is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine.
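The outer loop can be sketched compactly if the inner fluence-map optimization and the DVH comparison are abstracted as callables. The Python fragment below is a schematic of the weight-adjustment idea only, with hypothetical names and an assumed multiplicative update rule; it is not the authors' GPU implementation.

    import numpy as np

    def replan(inner_opt, dvh_deviation, w0, n_outer=30, step=0.5, tol=1e-3):
        # inner_opt(w) -> voxel dose vector from the fixed-weight quadratic
        #   fluence-map optimization (abstracted; placeholder callable).
        # dvh_deviation(dose) -> per-voxel measure of how far the current DVH
        #   lies from the reference plan's DVH (positive = worse than reference).
        w = np.asarray(w0, dtype=float).copy()
        for _ in range(n_outer):
            dose = inner_opt(w)
            dev = dvh_deviation(dose)
            if np.max(np.abs(dev)) < tol:        # DVH curves acceptable
                break
            # up-weight voxels in structures whose DVH is worse than reference
            w *= np.exp(step * np.clip(dev, 0.0, None))
        return w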
NASA Technical Reports Server (NTRS)
Kvaternik, Raymond G.; Piatak, David J.; Nixon, Mark W.; Langston, Chester W.; Singleton, Jeffrey D.; Bennett, Richard L.; Brown, Ross K.
2001-01-01
The results of a joint NASA/Army/Bell Helicopter Textron wind-tunnel test to assess the potential of Generalized Predictive Control (GPC) for actively controlling the swashplate of tiltrotor aircraft to enhance aeroelastic stability in the airplane mode of flight are presented. GPC is an adaptive time-domain predictive control method that uses a linear difference equation to describe the input-output relationship of the system and to design the controller. The test was conducted in the Langley Transonic Dynamics Tunnel using an unpowered 1/5-scale semispan aeroelastic model of the V-22 that was modified to incorporate a GPC-based multi-input multi-output control algorithm to individually control each of the three swashplate actuators. Wing responses were used for feedback. The GPC-based control system was highly effective in increasing the stability of the critical wing mode for all of the conditions tested, without measurable degradation of the damping in the other modes. The algorithm was also robust with respect to its performance in adjusting to rapid changes in both the rotor speed and the tunnel airspeed.
Fractal based curves in musical creativity: A critical annotation
NASA Astrophysics Data System (ADS)
Georgaki, Anastasia; Tsolakis, Christos
In this article we examine fractal curves and synthesis algorithms in musical composition and research. First we trace the evolution of different approaches to the use of fractals in music since the 1980s through a literature review. Furthermore, we review representative fractal algorithms and the platforms that implement them. Properties such as self-similarity (pink noise), correlation, memory (related to the notion of Brownian motion) or non-correlation at multiple levels (white noise) can be used to develop a hierarchy of criteria for analyzing different layers of musical structure. L-systems can be applied to the modelling of melody in different musical cultures as well as to the investigation of musical perception principles. Finally, we propose a critical investigation approach for the use of artificial or natural fractal curves in systematic musicology.
The dynamic micro computed tomography at SSRF
NASA Astrophysics Data System (ADS)
Chen, R.; Xu, L.; Du, G.; Deng, B.; Xie, H.; Xiao, T.
2018-05-01
Synchrotron radiation micro-computed tomography (SR-μCT) is a critical technique for quantitatively characterizing the 3D internal structure of samples; recently, dynamic SR-μCT has attracted great attention since it can follow the three-dimensional structural evolution of a sample. A dynamic μCT method based on a monochromatic beam was developed at the X-ray Imaging and Biomedical Application Beamline at the Shanghai Synchrotron Radiation Facility by combining a compressed-sensing-based CT reconstruction algorithm with a hardware upgrade. The monochromatic-beam-based method can achieve quantitative information at lower dose than the white-beam-based method, in which the lower-energy part of the beam is absorbed by the sample rather than contributing to the final imaging signal. The developed method was successfully used to investigate the compression of the air sac during respiration in a bell cricket, providing new knowledge for further research on the insect respiratory system.
A framework for porting the NeuroBayes machine learning algorithm to FPGAs
NASA Astrophysics Data System (ADS)
Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.
2016-01-01
The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. In order to test, characterize and easily adapt its implementation on FPGAs, a framework was developed. Within the framework, an HDL model written in Python using MyHDL is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by unit tests and HDL simulation for chosen configurations.
Kinetics of distribution of infections in networks
NASA Astrophysics Data System (ADS)
Avramov, I.
2007-06-01
We develop a model for disease spreading in networks in a manner similar to the kinetics of crystallization of undercooled melts. The same kind of equations can be used in ecology and in sociology studies; for instance, they control the spread of gossip among the population. The time t dependence of the overall fraction α(t) of an infected network mass (individuals) affected by the disease is represented by an S-shaped curve. The derivative, i.e. the time dependence of the intensity W(t) with which the epidemic evolves, is a bell-shaped curve. In essence, an analytical solution is offered describing the kinetics of spread of information along a (d-dimensional) network.
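Under the crystallization analogy stated above, an Avrami-type closed form reproduces the S-shaped α(t) and its bell-shaped derivative W(t). The sketch below assumes that functional form; the paper's exact solution may differ.

    import numpy as np

    def infected_fraction(t, tau=1.0, d=3):
        # Avrami-type S-curve: fraction of the network affected by time t.
        return 1.0 - np.exp(-(t / tau) ** d)

    def epidemic_intensity(t, tau=1.0, d=3):
        # W(t) = d(alpha)/dt: bell-shaped rate at which the epidemic evolves.
        return (d / tau) * (t / tau) ** (d - 1) * np.exp(-(t / tau) ** d)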
Physical therapy for facial paralysis: a tailored treatment approach.
Brach, J S; VanSwearingen, J M
1999-04-01
Bell palsy is an acute facial paralysis of unknown etiology. Although recovery from Bell palsy is expected without intervention, clinical experience suggests that recovery is often incomplete. This case report describes a classification system used to guide treatment and to monitor recovery of an individual with facial paralysis. The patient was a 71-year-old woman with complete left facial paralysis secondary to Bell palsy. Signs and symptoms were assessed using a standardized measure of facial impairment (Facial Grading System [FGS]) and questions regarding functional limitations. A treatment-based category was assigned based on signs and symptoms. Rehabilitation involved muscle re-education exercises tailored to the treatment-based category. In 14 physical therapy sessions over 13 months, the patient had improved facial impairments (initial FGS score = 17/100, final FGS score = 68/100) and no reported functional limitations. Recovery from Bell palsy can be a complicated and lengthy process. The use of a classification system may help simplify the rehabilitation process.
Zhou, Rui; Sun, Jinping; Hu, Yuxin; Qi, Yaolong
2018-01-31
Synthetic aperture radar (SAR) carried on a hypersonic air vehicle in near space has many advantages over conventional airborne SAR. However, its high-speed maneuvering with a curved trajectory results in serious range migration and exacerbates the contradiction between high resolution and wide swath. To solve this problem, this paper establishes an imaging geometrical model matched to the flight trajectory of the hypersonic platform and a multichannel azimuth sampling model based on displaced phase center antenna (DPCA) technology. Furthermore, based on multichannel signal reconstruction theory, a more efficient spectrum reconstruction model using the discrete Fourier transform is proposed to obtain uniformly sampled azimuth data. Because the high complexity of the slant range model makes it difficult to derive a processing algorithm for SAR imaging, an approximate range model is derived based on the minimax criterion, and the optimal second-order approximation coefficients of the cosine function are obtained using a two-population coevolutionary algorithm. On this basis, since the traditional Omega-K algorithm cannot compensate the residual phase owing to the difficulty of Stolt mapping along the range frequency axis, this paper proposes an Exact Transfer Function (ETF) algorithm for SAR imaging and presents a method of range division to achieve wide swath imaging. Simulation results verify the effectiveness of the ETF imaging algorithm.
ERIC Educational Resources Information Center
Nogler, Tracey A.
2017-01-01
The purpose of this quantitative causal-comparative research was to examine if and to what extent there were differences in students' cognitive load and the subsequent academic performance based on block bell schedule and traditional bell schedule for freshmen in Algebra 1 in the Southwestern United States. This study included students from two…
Baecklund transformation, Lax pair, and solutions for the Caudrey-Dodd-Gibbon equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu Qixing; Sun Kun; Jiang Yan
2011-01-15
By using Bell polynomials and symbolic computation, we investigate the Caudrey-Dodd-Gibbon equation analytically. Through a generalization of Bell polynomials, its bilinear form is derived, based on which the periodic wave solution and soliton solutions are presented, with graphic analysis of the soliton solutions. Furthermore, the Baecklund transformation and Lax pair are derived via Bell's exponential polynomials. Finally, the Ablowitz-Kaup-Newell-Segur system is constructed.
Jiang, Junfeng; Wang, Shaohua; Liu, Tiegen; Liu, Kun; Yin, Jinde; Meng, Xiange; Zhang, Yimo; Wang, Shuang; Qin, Zunqi; Wu, Fan; Li, Dingjie
2012-07-30
A demodulation algorithm based on absolute phase recovery at a selected monochromatic frequency is proposed for an optical fiber Fabry-Perot pressure sensing system. The algorithm uses the Fourier transform to obtain the relative phase, and the intercept of the unwrapped phase-frequency linear fit curve to identify the interference order, which are then used to recover the absolute phase. A simplified mathematical model of the polarized low-coherence interference fringes was established to illustrate the principle of the proposed algorithm. Phase unwrapping and the selection of the monochromatic frequency are discussed in detail. A pressure measurement experiment was carried out to verify the effectiveness of the proposed algorithm. Results showed that the demodulation precision of our algorithm reaches 0.15 kPa, a 13-fold improvement over the phase-slope-based algorithm.
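The order-identification step can be illustrated in a few lines of numpy: unwrap the spectral phase around the dominant fringe frequency, fit a line over frequency, and round the intercept to a multiple of 2π. This is a simplified illustration under our own assumptions, not the authors' exact procedure.

    import numpy as np

    def absolute_phase(fringes, k_sel):
        # fringes: sampled interference signal; k_sel: selected frequency bin.
        spec = np.fft.rfft(fringes)
        mag = np.abs(spec)
        kmax = int(np.argmax(mag[1:])) + 1          # dominant fringe frequency
        lo, hi = max(1, kmax // 2), min(len(spec) - 1, 2 * kmax)
        k = np.arange(lo, hi)
        phi = np.unwrap(np.angle(spec[lo:hi]))      # relative (wrapped) phase
        slope, intercept = np.polyfit(k, phi, 1)    # phase-frequency linear fit
        m = np.round(intercept / (2 * np.pi))       # interference order
        return slope * k_sel + intercept - 2 * np.pi * m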
Postselection-Loophole-Free Bell Test Over an Installed Optical Fiber Network.
Carvacho, Gonzalo; Cariñe, Jaime; Saavedra, Gabriel; Cuevas, Álvaro; Fuenzalida, Jorge; Toledo, Felipe; Figueroa, Miguel; Cabello, Adán; Larsson, Jan-Åke; Mataloni, Paolo; Lima, Gustavo; Xavier, Guilherme B
2015-07-17
Device-independent quantum communication will require a loophole-free violation of Bell inequalities. In typical scenarios where line of sight between the communicating parties is not available, it is convenient to use energy-time entangled photons due to intrinsic robustness while propagating over optical fibers. Here we show an energy-time Clauser-Horne-Shimony-Holt Bell inequality violation with two parties separated by 3.7 km over the deployed optical fiber network belonging to the University of Concepción in Chile. Remarkably, this is the first Bell violation with spatially separated parties that is free of the postselection loophole, which affected all previous in-field long-distance energy-time experiments. Our work takes a further step towards a fiber-based loophole-free Bell test, which is highly desired for secure quantum communication due to the widespread existing telecommunication infrastructure.
NASA Astrophysics Data System (ADS)
Zhang, Xianxia; Wang, Jian; Qin, Tinggao
2003-09-01
Intelligent control algorithms are introduced into the control system of temperature and humidity. A multi-mode PI-single-neuron control algorithm is proposed for single-loop control of temperature and humidity. In order to remove the coupling between temperature and humidity, a new decoupling method, called fuzzy decoupling, is presented: the decoupling is achieved by a fuzzy controller that dynamically modifies a static decoupling coefficient. Taking the PI-single-neuron control algorithm as the single-loop controller of temperature and humidity, the paper provides simulated output response curves with no decoupling control, static decoupling control and fuzzy decoupling control. These control algorithms are easily implemented in single-chip-based hardware systems.
NASA Astrophysics Data System (ADS)
Taherkhani, Mohammand Amin; Navi, Keivan; Van Meter, Rodney
2018-01-01
Quantum-aided Byzantine agreement is an important distributed quantum algorithm with unique features in comparison to classical deterministic and randomized algorithms, requiring only a constant expected number of rounds in addition to giving a higher level of security. In this paper, we analyze the high-level multi-party algorithm in detail and propose elements of the design for the quantum architecture and circuits required at each node to run the algorithm on a quantum repeater network (QRN). Our optimization techniques have reduced the quantum circuit depth by 44% and the number of qubits in each node by 20% for a minimum five-node setup, compared with a design based on standard arithmetic circuits. These improvements lead to a quantum system architecture with 160 qubits per node, a space-time product (an estimate of the required fidelity) KQ ≈ 1.3×10^5 per node, and an error threshold of 1.1×10^-6 for the total nodes in the network. Evaluation of the designed architecture shows that to execute the algorithm once on the minimum setup, we need to successfully distribute a total of 648 Bell pairs across the network, spread evenly between all pairs of nodes. This framework can be considered a starting point for establishing a road-map for a light-weight demonstration of a distributed quantum application on QRNs.
Electromagnetic Induction Spectroscopy for the Detection of Subsurface Targets
2012-12-01
[Figure-list residue: ROC curves of the proposed method and that of Fails et al.; for the kNN ROC curve, k = 7.] ...et al. [6] and Ramachandran et al. [7] both demonstrated success in detecting mines using the k-nearest-neighbor (kNN) algorithm based on the EMI... error is also included in the feature vector. The kNN labels an unknown target based on the closest targets in a training set. Collins et al. [2] and...
Infrared traffic image enhancement algorithm based on dark channel prior and gamma correction
NASA Astrophysics Data System (ADS)
Zheng, Lintao; Shi, Hengliang; Gu, Ming
2017-07-01
Infrared traffic images acquired by intelligent traffic surveillance equipment have low contrast, few hierarchical differences in perception, and a blurred visual effect. Therefore infrared traffic image enhancement, an indispensable key step, is applied in nearly all infrared-imaging-based traffic engineering applications. In this paper, we propose an infrared traffic image enhancement algorithm based on the dark channel prior and gamma correction. The dark channel prior, well known as an image dehazing method, is used here for infrared image enhancement for the first time. In the proposed algorithm, the original degraded infrared traffic image is first transformed with the dark channel prior to give the initial enhanced result. A further adjustment based on the gamma curve is needed because the initial enhanced result has lower brightness. Comprehensive validation experiments reveal that the proposed algorithm outperforms the current state-of-the-art algorithms.
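The two building blocks named above are simple to state in code. The Python sketch below computes a dark channel by local minimum filtering and applies a gamma curve; the transmission-estimation and recovery steps of full dark-channel dehazing are omitted, and the patch size and gamma value are assumptions.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # Per-pixel minimum over a local patch (and over channels, if any);
        # for single-channel infrared images this is a gray-scale erosion.
        if img.ndim == 3:
            img = img.min(axis=2)
        return minimum_filter(img, size=patch)

    def gamma_correct(img, gamma=0.6):
        # 8-bit input rescaled to [0, 1]; gamma < 1 brightens the result.
        x = img.astype(float) / 255.0
        return np.uint8(255.0 * x ** gamma)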
On the dynamics of jellyfish locomotion via 3D particle tracking velocimetry
NASA Astrophysics Data System (ADS)
Piper, Matthew; Kim, Jin-Tae; Chamorro, Leonardo P.
2016-11-01
The dynamics of jellyfish (Aurelia aurita) locomotion is experimentally studied via 3D particle tracking velocimetry. 3D locations of the bell tip are tracked over 1.5 cycles to describe the jellyfish path. Multiple positions of the jellyfish bell margin are initially tracked in 2D from four independent planes and individually projected in 3D based on the jellyfish path and geometrical properties of the setup. A cubic spline interpolation and the exponentially weighted moving average are used to estimate derived quantities, including velocity and acceleration of the jellyfish locomotion. We will discuss distinctive features of the jellyfish 3D motion at various swimming phases, and will provide insight on the 3D contraction and relaxation in terms of the locomotion, the steadiness of the bell margin eccentricity, and local Reynolds number based on the instantaneous mean diameter of the bell.
Fast and Exact Continuous Collision Detection with Bernstein Sign Classification
Tang, Min; Tong, Ruofeng; Wang, Zhendong; Manocha, Dinesh
2014-01-01
We present fast algorithms to perform accurate CCD queries between triangulated models. Our formulation uses properties of the Bernstein basis and Bézier curves and reduces the problem to evaluating signs of polynomials. We present a geometrically exact CCD algorithm based on the exact geometric computation paradigm to perform reliable Boolean collision queries. Our algorithm is more than an order of magnitude faster than prior exact algorithms. We evaluate its performance for cloth and FEM simulations on CPUs and GPUs, and highlight the benefits. PMID:25568589
Violation of Bell inequalities for arbitrary-dimensional bipartite systems
NASA Astrophysics Data System (ADS)
Yang, Yanmin; Zheng, Zhu-Jun
2018-01-01
In this paper, we consider the violation of Bell inequalities for the quantum system C^K ⊗ C^K (integer K ≥ 2) with a group theoretical method. For general M possible measurements, each with K outcomes, Bell inequalities based on the choice of two orbits are derived. When the observables are numerous enough, the quantum bounds depend only on M and approach the classical bounds. Moreover, the corresponding nonlocal games with two different scenarios are analyzed.
Metillo, Ephrime B; Ritz, David A
2003-02-01
Three mysid species showed differences in chemosensory feeding as judged from stereotyped food capturing responses to dissolved mixtures of feeding stimulant (either betaine-HCl or glycine) and suppressant (ammonium). The strongest responses were to 50:50 mixtures of both betaine-ammonium and glycine-ammonium solutions. In general, the response curve to the different mixtures tested was bell-shaped. Anisomysis mixta australis only showed the normal curve in response to the glycine-ammonium mixture. The platykurtic curve for Tenagomysis tasmaniae suggests a less optimal response to the betaine-HCl-ammonium solution. Paramesopodopsis rufa reacted more strongly to the betaine-ammonium than to the glycine-ammonium solutions, and more individuals of this species responded to both solutions than the other two species. It is suggested that these contrasting chemosensitivities of the three coexisting mysid species serve as a means of partitioning the feeding niche.
Maneuver Acoustic Flight Test of the Bell 430 Helicopter
NASA Technical Reports Server (NTRS)
Watts, Michael E.; Snider, Royce; Greenwood, Eric; Baden, Joel
2012-01-01
A cooperative flight test by NASA, Bell Helicopter and the U.S. Army to characterize the steady-state acoustics and measure the maneuver noise of a Bell Helicopter 430 aircraft was accomplished. The test occurred during June/July 2011 at Eglin Air Force Base, Florida. It gathered a total of 410 data points over 10 test days and compiled an extensive database of dynamic maneuver measurements. Three microphone configurations, with up to 31 microphones in each configuration, were used to acquire acoustic data. Aircraft data included DGPS, aircraft state and rotor state information. This paper provides an overview of the test.
W-curve alignments for HIV-1 genomic comparisons.
Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H
2010-06-01
The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is its single-pass algorithm for comparing sequences; avoiding recursion during sequence alignments offers advantages in speed and in-process resources. The graphical technique also allows multiple models of comparison to be used, depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap-associated problems predominantly embedded within the envelope gene of the virus; gaps and hypermutation of the virus slow conventional string-based alignments of the whole genome. This paper describes the W-curve algorithm itself and how we have adapted it for comparison of similar HIV-1 genomes. A tree-building method is developed with the W-curve that utilizes a novel cylindrical-coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a mother/infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string-based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process is presented, including a comparison technique of aligning extremes of the curves to effectively phase-shift them past the HIV-1 gap problem. Besides yielding similar neighbor-joining phenogram topologies, most mother and infant C2-V5 sequences in the cohort pairs geometrically map closest to each other, indicating that W-curve heuristics overcame any gap problem.
The SIST-M: Predictive validity of a brief structured Clinical Dementia Rating interview
Okereke, Olivia I.; Pantoja-Galicia, Norberto; Copeland, Maura; Hyman, Bradley T.; Wanggaard, Taylor; Albert, Marilyn S.; Betensky, Rebecca A.; Blacker, Deborah
2011-01-01
Background: We previously established reliability and cross-sectional validity of the SIST-M (Structured Interview and Scoring Tool-Massachusetts Alzheimer's Disease Research Center), a shortened version of an instrument shown to predict progression to Alzheimer disease (AD), even among persons with very mild cognitive impairment (vMCI). Objective: To test predictive validity of the SIST-M. Methods: Participants were 342 community-dwelling, non-demented older adults in a longitudinal study. Baseline Clinical Dementia Rating (CDR) ratings were determined by either: 1) clinician interviews or 2) a previously developed computer algorithm based on 60 questions (of a possible 131) extracted from clinician interviews. We developed age-, gender- and education-adjusted Cox proportional hazards models using CDR-sum-of-boxes (CDR-SB) as the predictor, where CDR-SB was determined by either clinician interview or algorithm; models were run for the full sample (n=342) and among those jointly classified as vMCI using clinician- and algorithm-based CDR ratings (n=156). We directly compared predictive accuracy using time-dependent Receiver Operating Characteristic (ROC) curves. Results: AD hazard ratios (HRs) were similar for clinician-based and algorithm-based CDR-SB: for a 1-point increment in CDR-SB, respective HRs (95% CI) = 3.1 (2.5, 3.9) and 2.8 (2.2, 3.5); among those with vMCI, respective HRs (95% CI) were 2.2 (1.6, 3.2) and 2.1 (1.5, 3.0). Similarly high predictive accuracy was achieved: the concordance probability (weighted average of the area-under-the-ROC curves) over follow-up was 0.78 vs. 0.76 using clinician-based vs. algorithm-based CDR-SB. Conclusion: CDR scores based on items from this shortened interview had high predictive ability for AD, comparable to that using a lengthy clinical interview. PMID:21986342
Vazquez-Leal, H.; Jimenez-Fernandez, V. M.; Benhammouda, B.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Ramirez-Pinero, A.; Marin-Hernandez, A.; Huerta-Chua, J.
2014-01-01
We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled by using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves meant to replace the multidimensional interpolation and fine tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight line equation. PMID:25184157
Experimental violation of multipartite Bell inequalities with trapped ions.
Lanyon, B P; Zwerger, M; Jurcevic, P; Hempel, C; Dür, W; Briegel, H J; Blatt, R; Roos, C F
2014-03-14
We report on the experimental violation of multipartite Bell inequalities by entangled states of trapped ions. First, we consider resource states for measurement-based quantum computation of between 3 and 7 ions and show that all strongly violate a Bell-type inequality for graph states, where the criterion for violation is a sufficiently high fidelity. Second, we analyze Greenberger-Horne-Zeilinger states of up to 14 ions generated in a previous experiment using stronger Mermin-Klyshko inequalities, and show that in this case the violation of local realism increases exponentially with system size. These experiments represent a violation of multipartite Bell-type inequalities of deterministically prepared entangled states. In addition, the detection loophole is closed.
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Żukowski, Marek
2017-04-01
Communication complexity problems (CCPs) are tasks in which separated parties attempt to compute a function whose inputs are distributed among the parties. Their communication is limited so that not all inputs can be sent. We show that broad classes of Bell inequalities can be mapped to CCPs and that a quantum violation of a Bell inequality is a necessary and sufficient condition for an enhancement of the related CCP beyond its classical limitation. However, one can implement CCPs by transmitting a quantum system, encoding no more information than is allowed in the CCP, and extracting information by performing measurements. We show that for a large class of Bell inequalities, the improvement of the CCP associated with a quantum violation of a Bell inequality can be no greater than the improvement obtained from quantum prepare-transmit-measure strategies.
Near-equilibrium dumb-bell-shaped figures for cohesionless small bodies
NASA Astrophysics Data System (ADS)
Descamps, Pascal
2016-02-01
In a previous paper (Descamps, P. [2015]. Icarus 245, 64-79), we developed a specific method aimed at retrieving the main physical characteristics (shape, density, surface scattering properties) of highly elongated bodies from their rotational lightcurves through the use of dumb-bell-shaped equilibrium figures. The present work is a test of this method. For that purpose we introduce near-equilibrium dumb-bell-shaped figures, which are base dumb-bell equilibrium shapes modulated by lognormal statistics. Such synthetic irregular models are used to generate lightcurves to which our method is successfully applied. Shape statistical parameters of such near-equilibrium dumb-bell-shaped objects are in good agreement with those calculated, for example, for the Asteroid (216) Kleopatra from its dog-bone radar model. This suggests that such bilobed and elongated asteroids can be approximated by equilibrium figures perturbed by the interplay with a substantial internal friction modeled by a Gaussian random sphere.
Violation of local realism with freedom of choice.
Scheidl, Thomas; Ursin, Rupert; Kofler, Johannes; Ramelow, Sven; Ma, Xiao-Song; Herbst, Thomas; Ratschbacher, Lothar; Fedrizzi, Alessandro; Langford, Nathan K; Jennewein, Thomas; Zeilinger, Anton
2010-11-16
Bell's theorem shows that local realistic theories place strong restrictions on observable correlations between different systems, giving rise to Bell's inequality, which can be violated in experiments using entangled quantum states. Bell's theorem is based on the assumptions of realism, locality, and the freedom to choose between measurement settings. In experimental tests, "loopholes" arise which allow observed violations to still be explained by local realistic theories. Violating Bell's inequality while simultaneously closing all such loopholes is one of the most significant open challenges in fundamental physics today. In this paper, we present an experiment that violates Bell's inequality while simultaneously closing the locality loophole and addressing the freedom-of-choice loophole, closing the latter within a reasonable set of assumptions. We also explain that the locality and freedom-of-choice loopholes can be closed only within nondeterminism, i.e., in the context of stochastic local realism.
Lane detection based on color probability model and fuzzy clustering
NASA Astrophysics Data System (ADS)
Yu, Yang; Jo, Kang-Hyun
2018-04-01
In vehicle driver-assistance systems, the accuracy and speed of lane-line detection are of primary importance. This paper proposes a lane-detection method based on a color probability model and the Fuzzy Local Information C-Means (FLICM) clustering algorithm. The Hough transform and the structural constraints of the road are used to detect the lane lines accurately, and a global map of the lane lines is drawn using the lane curve fitting equation. The experimental results show that the algorithm has good robustness.
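A minimal version of the edge-plus-Hough stage of such a pipeline can be sketched with OpenCV. The FLICM color clustering and the paper's structural road constraints are omitted here; a fixed trapezoidal region of interest stands in for them, and the input filename is hypothetical.

```python
# Lane-segment detection sketch: Canny edges restricted to a road-shaped
# region of interest, followed by a probabilistic Hough transform.
import cv2
import numpy as np

img = cv2.imread("road.jpg")                       # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

h, w = edges.shape                                 # trapezoidal ROI mask
mask = np.zeros_like(edges)
roi = np.array([[(0, h - 1), (w // 2 - 50, h // 2),
                 (w // 2 + 50, h // 2), (w - 1, h - 1)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)
edges = cv2.bitwise_and(edges, mask)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=100)
if lines is not None:
    for x1, y1, x2, y2 in (l[0] for l in lines):   # draw detected segments
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 3)
cv2.imwrite("lanes.jpg", img)
```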
NASA Astrophysics Data System (ADS)
van Rheenen, Arthur D.; Taule, Petter; Thomassen, Jan Brede; Madsen, Eirik Blix
2018-04-01
We present Minimum-Resolvable Temperature Difference (MRTD) curves obtained by letting an ensemble of observers judge how many of the six four-bar patterns they can "see" in a set of images taken with different bar-to-background contrasts. The same images are analyzed using elementary signal analysis algorithms, and machine-analysis-based MRTD curves are obtained. We show that, by adjusting the minimum required signal-to-noise ratio, the machine-based MRTDs become very similar to those obtained with the help of the human observers.
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.
Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan
2016-09-24
This paper proposes a time-frequency algorithm based on the short-time fractional order Fourier transform (STFRFT) for the identification of targets with complicated movements. The algorithm, which incorporates a quick method for selecting and changing the STFRFT order, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal; it improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.
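The extraction-and-fit step can be illustrated with an ordinary STFT standing in for the STFRFT (SciPy has no fractional Fourier transform): the ridge, i.e. the per-frame frequency of maximum energy, is extracted and fitted with a polynomial, a crude analogue of the multi-order matching. The signal parameters below are invented.

```python
# Micro-Doppler time-frequency curve extraction sketch (STFT stand-in).
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
sig = np.cos(2*np.pi*100*t + 30*np.sin(2*np.pi*1.0*t))  # toy micro-Doppler return

f, frames, Z = stft(sig, fs=fs, nperseg=128, noverlap=112)
ridge = f[np.abs(Z).argmax(axis=0)]                     # dominant frequency per frame

coef = np.polyfit(frames, ridge, deg=8)                 # fit the time-frequency curve
print("fitted instantaneous frequency at t = 0.25 s: %.1f Hz"
      % np.polyval(coef, 0.25))                         # expect roughly 100 Hz
```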
Impact adding bifurcation in an autonomous hybrid dynamical model of church bell
NASA Astrophysics Data System (ADS)
Brzeski, P.; Chong, A. S. E.; Wiercigroch, M.; Perlikowski, P.
2018-05-01
In this paper we present the bifurcation analysis of the yoke-bell-clapper system corresponding to the biggest bell, "Serce Lodzi", mounted in the Cathedral Basilica of St Stanislaus Kostka, Lodz, Poland. The mathematical model of the system considered in this work has been derived and verified based on measurements of the dynamics of the real bell. We perform numerical analysis both by direct numerical integration and by the path-following method using the toolbox ABESPOL (Chong, 2016). By introducing the active yoke, the position of the bell-clapper system with respect to the yoke axis of rotation can be easily changed and used to probe the system dynamics. We found a wide variety of periodic and non-periodic solutions, and examined the ranges of coexistence of solutions and transitions between them via different types of bifurcations. Finally, a new type of bifurcation induced by a grazing event, an "impact adding bifurcation", is proposed. When it occurs, the number of impacts between the bell and the clapper increases while the period of the system's motion stays the same.
The simulation library of the Belle II software system
NASA Astrophysics Data System (ADS)
Kim, D. Y.; Ritter, M.; Bilka, T.; Bobrov, A.; Casarosa, G.; Chilikin, K.; Ferber, T.; Godang, R.; Jaegle, I.; Kandra, J.; Kodys, P.; Kuhr, T.; Kvasnicka, P.; Nakayama, H.; Piilonen, L.; Pulvermacher, C.; Santelj, L.; Schwenker, B.; Sibidanov, A.; Soloviev, Y.; Starič, M.; Uglov, T.
2017-10-01
SuperKEKB, the next generation B factory, has been constructed in Japan as an upgrade of KEKB. This brand new e+ e- collider is expected to deliver a very large data set for the Belle II experiment, 50 times larger than the previous Belle sample. Both the triggered physics event rate and the background event rate will increase by at least a factor of 10 relative to the previous ones, creating a challenging data taking environment for the Belle II detector. The software system of the Belle II experiment is designed to handle this ambitious plan. A full detector simulation library, which is a part of the Belle II software system, was created based on Geant4 and has been tested thoroughly. Recently the library was upgraded to Geant4 version 10.1. The library behaves as expected and is actively used in producing Monte Carlo data sets for various studies. In this paper, we explain the structure of the simulation library and the various interfaces to other packages, including geometry and beam background simulation.
Helmer, Markus; Kozyrev, Vladislav; Stephan, Valeska; Treue, Stefan; Geisel, Theo; Battaglia, Demian
2016-01-01
Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model, e.g. a Gaussian or other bell-shaped curve, to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to determine reliably the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need of fitting any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific. Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches. PMID:26785378
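The fit-based versus model-free contrast drawn in this abstract can be made concrete on synthetic data: a parametric tuning-curve fit yields a preferred direction as a fitted parameter, while a response-weighted circular mean recovers it with no model at all. The tuning model and all numbers below are invented for illustration.

```python
# Preferred-direction estimation: parametric fit vs model-free circular mean.
import numpy as np
from scipy.optimize import curve_fit

theta = np.deg2rad(np.arange(0, 360, 30))          # 12 motion directions
true_pref = np.deg2rad(75)
rng = np.random.default_rng(0)
rate = 5 + 20*np.exp(np.cos(theta - true_pref) - 1) + rng.normal(0, 1, theta.size)

# fit-based: circular-Gaussian (von Mises-like) tuning model
def model(th, base, amp, pref, width):
    return base + amp*np.exp((np.cos(th - pref) - 1) / width**2)
p, _ = curve_fit(model, theta, rate, p0=[5, 15, np.pi/2, 1.0])

# model-free: response-weighted circular mean, no model assumed
pref_free = np.angle(np.sum(rate * np.exp(1j * theta)))

print("fit-based preferred direction:  %.1f deg" % np.rad2deg(p[2]))
print("model-free preferred direction: %.1f deg" % (np.rad2deg(pref_free) % 360))
```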
Recognition of fiducial marks applied to robotic systems. Thesis
NASA Technical Reports Server (NTRS)
Georges, Wayne D.
1991-01-01
The objective was to devise a method to determine the position and orientation of the links of a PUMA 560 using fiducial marks. To this end, it is necessary to design fiducial marks and a corresponding feature extraction algorithm. The marks used are composites of three basic shapes: a circle, an equilateral triangle, and a square. Once a mark is imaged, it is thresholded and the borders of each shape are extracted. These borders are subsequently used in a feature extraction algorithm. Two feature extraction algorithms are compared to determine which one produces the most reliable results. The first algorithm is based on moment invariants and the second is based on the discrete version of the psi-s curve of the boundary. The latter algorithm is clearly superior for this application.
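The psi-s descriptor named above is the tangent angle psi of a closed contour as a function of arc length s. A minimal computation on a synthetic contour (a circle, for which psi grows linearly) looks like this:

```python
# Discrete psi-s curve of a closed boundary.
import numpy as np

t = np.linspace(0, 2*np.pi, 200, endpoint=False)
boundary = np.c_[np.cos(t), np.sin(t)]                     # closed contour points

d = np.diff(np.vstack([boundary, boundary[:1]]), axis=0)   # wrap around
seg = np.hypot(d[:, 0], d[:, 1])
s = np.cumsum(seg) / seg.sum()                             # normalized arc length
psi = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))              # tangent angle, unwrapped

print("total turning:", psi[-1] - psi[0])   # ~ 2*pi for a simple closed curve
```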
Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs
NASA Astrophysics Data System (ADS)
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Jiang Graves, Yan; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-01
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tuning planning parameters is time-consuming and usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on a GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine. This work was originally presented at the 54th AAPM annual meeting in Charlotte, NC, July 29-August 2, 2012.
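The two-loop structure can be reduced to a toy sketch: an inner quadratic optimization with fixed voxel weights, and an outer loop that raises the weight of voxels whose dose deviates from a reference. The dose-influence matrix, step sizes, and the deviation-based weight update below are all invented stand-ins for the paper's DVH-guided, GPU-based implementation.

```python
# Toy two-loop re-optimization: inner weighted least squares on the fluence,
# outer voxel-weight adjustment from dose deviations.
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_beamlets = 200, 40
D = rng.uniform(0, 1, (n_vox, n_beamlets))      # toy dose-influence matrix
d_ref = rng.uniform(20, 60, n_vox)              # reference (original-plan) dose

w = np.ones(n_vox)                              # voxel weighting factors
x = np.zeros(n_beamlets)                        # fluence (beamlet) weights
for outer in range(10):
    # inner loop: minimize sum_i w_i * (D x - d_ref)_i^2 subject to x >= 0
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 * w.max())   # safe gradient step
    for _ in range(200):                        # projected gradient descent
        g = D.T @ (w * (D @ x - d_ref))
        x = np.maximum(0.0, x - step * g)
    # outer loop: raise weights where the dose deviates most, a crude
    # stand-in for the paper's DVH-curve comparison
    dev = np.abs(D @ x - d_ref)
    w *= 1.0 + dev / (dev.max() + 1e-12)
    w /= w.mean()
print("mean |dose deviation|:", np.abs(D @ x - d_ref).mean())
```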
The Belle II Silicon Vertex Detector
NASA Astrophysics Data System (ADS)
Friedl, M.; Ackermann, K.; Aihara, H.; Aziz, T.; Bergauer, T.; Bozek, A.; Campbell, A.; Dingfelder, J.; Drasal, Z.; Frankenberger, A.; Gadow, K.; Gfall, I.; Haba, J.; Hara, K.; Hara, T.; Higuchi, T.; Himori, S.; Irmler, C.; Ishikawa, A.; Joo, C.; Kah, D. H.; Kang, K. H.; Kato, E.; Kiesling, C.; Kodys, P.; Kohriki, T.; Koike, S.; Kvasnicka, P.; Marinas, C.; Mayekar, S. N.; Mibe, T.; Mohanty, G. B.; Moll, A.; Negishi, K.; Nakayama, H.; Natkaniec, Z.; Niebuhr, C.; Onuki, Y.; Ostrowicz, W.; Park, H.; Rao, K. K.; Ritter, M.; Rozanska, M.; Saito, T.; Sakai, K.; Sato, N.; Schmid, S.; Schnell, M.; Shimizu, N.; Steininger, H.; Tanaka, S.; Tanida, K.; Taylor, G.; Tsuboyama, T.; Ueno, K.; Uozumi, S.; Ushiroda, Y.; Valentan, M.; Yamamoto, H.
2013-12-01
The KEKB machine and the Belle experiment in Tsukuba (Japan) are now undergoing an upgrade, leading to an ultimate luminosity of 8×1035 cm-2 s-1 in order to measure rare decays in the B system with high statistics. The previous vertex detector cannot cope with this 40-fold increase of luminosity and thus needs to be replaced. Belle II will be equipped with a two-layer Pixel Detector surrounding the beam pipe, and four layers of double-sided silicon strip sensors at higher radii than the old detector. The Silicon Vertex Detector (SVD) will have a total sensitive area of 1.13 m2 and 223,744 channels, twice as many as its predecessor. All silicon sensors will be made from 150 mm wafers in order to maximize their size and thus to reduce the relative contribution of the support structure. The forward part has slanted sensors of trapezoidal shape to improve the measurement precision and to minimize the amount of material as seen by particles from the vertex. Fast-shaping front-end amplifiers will be used in conjunction with an online hit time reconstruction algorithm in order to reduce the occupancy to the level of a few percent at most. A novel “Origami” chip-on-sensor scheme is used to minimize both the distance between strips and amplifier (thus reducing the electronic noise) as well as the overall material budget. This report gives an overview on the status of the Belle II SVD and its components, including sensors, front-end detector ladders, mechanics, cooling and the readout electronics.
76 FR 3516 - Drawbridge Operation Regulation; Gulf Intracoastal Waterway, Belle Chasse, LA
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
...'Awlins Air Show, to be held at the U.S. Naval Air Station, Joint Reserve Base at Belle Chasse, Louisiana... expected to depart the Naval Air Station, Joint Reserve Base following the event. This year, the event is... year. A large number of the public is expected to attend the Naval Air Station Open House and Air Show...
Entanglement distillation protocols and number theory
NASA Astrophysics Data System (ADS)
Bombin, H.; Martin-Delgado, M. A.
2005-09-01
We show that the analysis of entanglement distillation protocols for qudits of arbitrary dimension D benefits from applying basic concepts from number theory, since the set Z_D^n associated with Bell diagonal states is a module rather than a vector space. We find that a partition of Z_D^n into divisor classes characterizes the invariant properties of mixed Bell diagonal states under local permutations. We construct a very general class of recursion protocols by means of unitary operations implementing these local permutations. We study these distillation protocols depending on whether we use twirling operations in the intermediate steps or not, and we study them both analytically and numerically with Monte Carlo methods. In the absence of twirling operations, we construct extensions of the quantum privacy algorithms valid for secure communications with qudits of any dimension D. When D is a prime number, we show that distillation protocols are optimal both qualitatively and quantitatively.
Track vertex reconstruction with neural networks at the first level trigger of Belle II
NASA Astrophysics Data System (ADS)
Neuhaus, Sara; Skambraks, Sebastian; Kiesling, Christian
2017-08-01
The track trigger is one of the main components of the Belle II first level trigger, taking input from the Central Drift Chamber (CDC). It consists of several stages, first combining hits to track segments, followed by a 2D track finding in the transverse plane and finally a 3D track reconstruction. The results of the track trigger are the track multiplicity, the momentum vector of each track and the longitudinal displacement of the origin or production vertex of each track ("z-vertex"). The latter allows background tracks originating outside of the interaction region to be rejected, suppressing a large fraction of the machine background. This contribution focuses on the track finding stage using Hough transforms and on the z-vertex reconstruction with neural networks. We describe the algorithms and show performance studies on simulated events.
Bello, Cesare; Osella, Giuseppe; Baviera, Cosimo
2017-10-13
The genus Dodomeira Bellò & Baviera gen. n., of the tribe Peritelini Lacordaire (1863) (Curculionidae: Entiminae), which includes 39 species, is described. Seven species are transferred from Pseudomeira Stierlin, 1881: Dodomeira confusa (Pierotti, 2012) comb. n., Dodomeira exigua (Stierlin, 1861) comb. n., Dodomeira ficuzzensis (Bellò & Baviera, 2011) comb. n., Dodomeira himerensis (Bellò & Baviera, 2011) comb. n., Dodomeira petrensis (Bellò & Baviera, 2011) comb. n., Dodomeira pfisteri (Stierlin, 1864) comb. n., Dodomeira trinacriae (Bellò & Baviera, 2011) comb. n. Thirty-two species are new to science and are described here: Dodomeira adrianae Bellò & Baviera sp. n., Dodomeira alta Bellò & Baviera sp. n., Dodomeira angelae Bellò & Baviera sp. n., Dodomeira asinelliensis Bellò & Baviera sp. n., Dodomeira belicensis Bellò & Baviera sp. n., Dodomeira bertoni Bellò & Baviera sp. n., Dodomeira calatina Bellò & Baviera sp. n., Dodomeira caoduroi Bellò & Baviera sp. n., Dodomeira elima Bellò & Baviera sp. n., Dodomeira enzoi Bellò & Baviera sp. n., Dodomeira fossor Bellò & Baviera sp. n., Dodomeira forbicionii Bellò & Baviera sp. n., Dodomeira genistae Bellò & Baviera sp. n., Dodomeira giustoi Bellò & Baviera sp. n., Dodomeira hiemalis Bellò & Baviera sp. n., Dodomeira ibleiensis Bellò & Baviera sp. n., Dodomeira illuminatae Bellò & Baviera sp. n., Dodomeira juliae Bellò & Baviera sp. n., Dodomeira laliaensis Bellò & Baviera sp. n., Dodomeira magrinii Bellò & Baviera sp. n., Dodomeira montivaga Bellò & Baviera sp. n., Dodomeira margheritae Bellò & Baviera sp. n., Dodomeira maritimaensis Bellò & Baviera sp. n., Dodomeira nobilis Bellò & Baviera sp. n., Dodomeira paladinii Bellò & Baviera sp. n., Dodomeira sabellai Bellò & Baviera sp. n., Dodomeira saccoi Bellò & Baviera sp. n., Dodomeira sicana Bellò & Baviera sp. n., Dodomeira sicelidis Bellò & Baviera sp. n., Dodomeira siderea Bellò & Baviera sp. n., Dodomeira silvanae Bellò & Baviera sp. n., Dodomeira zingara Bellò & Baviera sp. n. In addition, according to morphological characters, eight informal groups of species are established (the number of species ascribed to each group is in brackets): Dodomeira adrianae species group (13), Dodomeira caoduroi species group (2), Dodomeira exigua species group (5), Dodomeira ficuzzensis species group (2), Dodomeira maritimaensis species group (1), Dodomeira petrensis species group (2), Dodomeira pfisteri species group (13), Dodomeira saccoi species group (1). We present a key for the identification of the new genus among the Palaearctic Peritelini, one for the species groups, and others for the species within each group. A checklist of all the species currently known of Dodomeira gen. n. and Pseudomeira Stierlin (1881), with distribution maps and data on the ecology and phenology of all the species of Dodomeira gen. n., is also provided.
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Technical Reports Server (NTRS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-01-01
The results of a comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous, i.e., quantitative, interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Astrophysics Data System (ADS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-03-01
The results of a comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous, i.e., quantitative, interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
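The statistical-fatigue-curve idea, a mean strain-cycle curve plus a scatter distribution yielding a failure probability at a chosen design life, can be sketched numerically. The Coffin-Manson constants, scatter parameter, and cycle counts below are purely illustrative, not the paper's values.

```python
# Mean Coffin-Manson curve N = C * strain^(-1/b) with lognormal scatter,
# giving a cumulative interconnect failure probability at a design life.
import numpy as np
from scipy.stats import lognorm

C, b = 0.3, 0.5                  # made-up Coffin-Manson constants
strain = 0.005                   # cyclic strain range per thermal cycle
N_mean = C * strain ** (-1 / b)  # mean cycles to failure: 0.3 * 0.005^-2 = 12000
sigma = 0.8                      # lognormal scatter of cycles-to-failure

design_cycles = 20 * 365         # ~20 years of daily thermal cycles
P_fail = lognorm.cdf(design_cycles, s=sigma, scale=N_mean)
print("predicted cumulative interconnect failures: %.1f%%" % (100 * P_fail))
```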
Wang, Maocai; Dai, Guangming; Choo, Kim-Kwang Raymond; Jayaraman, Prem Prakash; Ranjan, Rajiv
2016-01-01
Information confidentiality is an essential requirement for cyber security in critical infrastructure. Identity-based cryptography, an increasingly popular branch of cryptography, is widely used to protect information confidentiality in the critical infrastructure sector due to the ability to directly compute a user's public key from the user's identity. However, computational requirements complicate the practical application of identity-based cryptography. In order to improve the efficiency of identity-based cryptography, this paper presents an effective method to construct pairing-friendly elliptic curves with low Hamming weight 4 under embedding degree 1. Based on an analysis of the Complex Multiplication (CM) method, the soundness of our method to calculate the characteristic of the finite field is proved. Then, three related algorithms to construct pairing-friendly elliptic curves are put forward. Ten elliptic curves with low Hamming weight 4 under 160 bits are presented to demonstrate the utility of our approach. Finally, the evaluation also indicates that it is more efficient to compute the Tate pairing with our curves than with those of Bertoni et al.
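The low-Hamming-weight condition itself is easy to demonstrate: search for field characteristics q of the form 2^a + 2^b + 2^c + 1 (Hamming weight 4) with a large prime r dividing q - 1, which is the embedding-degree-1 condition r | q^1 - 1. The search bounds and subgroup-size threshold below are toy values; the paper's CM-based curve construction is not reproduced.

```python
# Toy search for Hamming-weight-4 prime characteristics with embedding degree 1.
from itertools import combinations
from sympy import isprime, factorint

found = []
for a, b, c in combinations(range(2, 64), 3):   # a < b < c
    q = (1 << a) + (1 << b) + (1 << c) + 1      # binary Hamming weight 4
    if not isprime(q):
        continue
    r = int(max(factorint(q - 1)))              # largest prime factor of q - 1
    if r.bit_length() >= 40:                    # toy subgroup-size threshold
        found.append((q, r))
        if len(found) == 3:
            break
for q, r in found:
    print("q = %d (weight 4), prime r | q - 1 with %d bits" % (q, r.bit_length()))
```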
Automated Optimization of Potential Parameters
Di Pierro, Michele; Elber, Ron
2013-01-01
An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115
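The refinement loop described here, sensitivities of simulated observables with respect to potential parameters driving a local minimization of the experiment-simulation mismatch, can be sketched as a damped Gauss-Newton iteration. The "simulation" below is a cheap stand-in function with invented targets, not a molecular dynamics run.

```python
# Sensitivity-based parameter refinement: finite-difference sensitivity matrix
# plus a damped Gauss-Newton step toward the experimental targets.
import numpy as np

target = np.array([0.70, 0.12])        # e.g. helix fraction, melting-curve slope

def simulate(p):                       # toy observable model
    return np.array([np.tanh(p[0] + 0.3*p[1]), 0.05*p[0]**2 + 0.1*p[1]])

p = np.array([0.2, 0.2])               # initial potential parameters
for it in range(20):
    r = simulate(p) - target
    h = 1e-6                           # sensitivities S_ij = d obs_i / d p_j
    S = np.column_stack([(simulate(p + h*e) - simulate(p - h*e)) / (2*h)
                         for e in np.eye(p.size)])
    step = np.linalg.lstsq(S, -r, rcond=None)[0]
    p = p + 0.5 * step                 # damping for robustness
print("refined parameters:", p, "residual:", simulate(p) - target)
```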
NASA Astrophysics Data System (ADS)
Sisodia, Mitali; Shukla, Abhishek; Pathak, Anirban
2017-12-01
A scheme for distributed quantum measurement that allows nondestructive or indirect Bell measurement was proposed by Gupta et al [1]. In the present work, Gupta et al.'s scheme is experimentally realized using the five-qubit superconducting quantum computer recently made available on the cloud by IBM Corporation. The experiment confirmed that the Bell state can be constructed and measured in a nondestructive manner with a reasonably high fidelity. A comparison of the outcomes of this study with the results obtained earlier in an NMR-based experiment (Samal et al. (2010) [10]) has also been performed. The study indicates that to make a scalable SQUID-based quantum computer, errors introduced by the gates (in the present technology) have to be reduced considerably.
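The essence of a nondestructive Bell measurement is that the ZZ and XX parities of the pair are copied onto ancilla qubits and only the ancillas are measured, leaving the Bell pair intact. The pure-state simulation below illustrates this idea with a standard parity-copy circuit; it is a sketch of the concept, not the circuit of the referenced scheme or the IBM chip.

```python
# Nondestructive Bell measurement sketch: parity copies onto two ancillas.
import numpy as np

n = 4                                     # q0,q1: data pair; q2,q3: ancillas
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.eye(4)[[0, 1, 3, 2]]              # CNOT, first listed qubit = control

def apply(gate, qubits, psi):
    """Apply a gate to the given qubit indices of an n-qubit state vector."""
    k = len(qubits)
    psi = psi.reshape([2] * n)
    psi = np.tensordot(gate.reshape([2] * (2 * k)), psi,
                       axes=(range(k, 2 * k), qubits))
    return np.moveaxis(psi, range(k), qubits).reshape(-1)

psi = np.zeros(2 ** n); psi[0] = 1.0
psi = apply(H, [0], psi); psi = apply(CX, [0, 1], psi)      # Bell pair |Phi+>
psi = apply(CX, [0, 2], psi); psi = apply(CX, [1, 2], psi)  # ZZ parity -> q2
psi = apply(H, [3], psi)                                    # XX parity -> q3
psi = apply(CX, [3, 0], psi); psi = apply(CX, [3, 1], psi)
psi = apply(H, [3], psi)

p = (np.abs(psi.reshape([2] * n)) ** 2).sum(axis=(0, 1))    # ancilla marginals
print("P(ancillas = 00):", p[0, 0])                         # -> 1.0, flags |Phi+>

rho = np.tensordot(psi.reshape([2] * n), psi.reshape([2] * n).conj(),
                   axes=([2, 3], [2, 3])).reshape(4, 4)     # data-qubit state
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
print("Bell fidelity afterwards:", bell @ rho @ bell)       # -> 1.0 (undisturbed)
```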
Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.
Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan
2018-01-01
Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time in the course of parameter optimization; its identification accuracy also depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which realizes rapid global optimization of parameters. In order to obtain the global optimal solution, different inertia weights such as constant inertia weight, linear inertia weight, and nonlinear inertia weight are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages over 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and the average fitness value after convergence is much closer to the optimal fitness value, which is better than the linear inertia weight. In addition, a better nonlinear inertia weight is verified. PMID:29853983
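The PSO-over-SVM-hyperparameters idea can be sketched with scikit-learn, with a single-kernel SVC standing in for the multiple-kernel machine of the paper and a linearly decreasing inertia weight, one of the variants compared above. The dataset, swarm size, and coefficients are illustrative choices.

```python
# Toy PSO search over SVM hyperparameters (log10 C, log10 gamma).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(p):                          # cross-validated accuracy as fitness
    clf = SVC(C=10.0 ** p[0], gamma=10.0 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n_part, iters = 10, 15
lo, hi = np.array([-2, -6]), np.array([3, -1])
pos = rng.uniform(lo, hi, (n_part, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]

for it in range(iters):
    w = 0.9 - 0.5 * it / iters           # linear inertia weight, 0.9 -> 0.4
    r1, r2 = rng.random((2, n_part, 1))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()]
print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_f.max())
```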
Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies
NASA Astrophysics Data System (ADS)
Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.
2006-01-01
The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method of reconstruction of a quadratic curve in 3-D space as an intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of a curve in 2-D space are found, from which the 3-D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The results of the described reconstruction methodology are validated by simulation studies. This reconstruction methodology is applicable to LBW decisions in cricket, missile path tracking, robotic vision, path planning, etc.
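The 2-D least-squares stage of such a method fits a conic a x^2 + b xy + c y^2 + d x + e y + f = 0 to image points, which reduces to finding the null vector of a design matrix via SVD. The sketch below shows only this stage, on a synthetic noisy ellipse; the 3-D step of intersecting the two back-projected cones is not reproduced.

```python
# Least-squares conic fit: coefficients as the smallest right singular vector.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 100)
pts = np.c_[3 * np.cos(t), 2 * np.sin(t)] + rng.normal(0, 0.01, (100, 2))

x, y = pts[:, 0], pts[:, 1]
A = np.c_[x**2, x * y, y**2, x, y, np.ones_like(x)]   # design matrix rows
conic = np.linalg.svd(A)[2][-1]                       # null vector of A
print("conic coefficients (a,b,c,d,e,f):", conic / np.abs(conic).max())
```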
Single, Complete, Probability Spaces Consistent With EPR-Bohm-Bell Experimental Data
NASA Astrophysics Data System (ADS)
Avis, David; Fischer, Paul; Hilbert, Astrid; Khrennikov, Andrei
2009-03-01
We show that paradoxical consequences of violations of Bell's inequality are induced by the use of an unsuitable probabilistic description for the EPR-Bohm-Bell experiment. The conventional description (due to Bell) is based on a combination of statistical data collected for different settings of polarization beam splitters (PBSs). In fact, such data consist of conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model both matches the experimental data and is consistent with classical probability theory.
A programmable two-qubit quantum processor in silicon.
Watson, T F; Philips, S G J; Kawakami, E; Ward, D R; Scarlino, P; Veldhorst, M; Savage, D E; Lagally, M G; Friesen, Mark; Coppersmith, S N; Eriksson, M A; Vandersypen, L M K
2018-03-29
Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch-Jozsa algorithm and the Grover search algorithm, canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85-89 per cent and concurrences of 73-82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin
2017-01-21
RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step to understand and interpret their functional relationship. The majority of functional RNAs show conserved secondary structures rather than sequence conservation, so algorithms relying on sequence-based features alone usually have limited prediction performance; integrating RNA structure features is therefore critical for RNA analysis. Existing algorithms mainly fall into two categories, alignment-based and alignment-free, where alignment-free RNA comparison algorithms usually have lower time complexity. Here an alignment-free RNA comparison algorithm is proposed, in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), encodes the RNA sequence and its corresponding secondary structure features. A multi-scale similarity score of two given RNAs is then designed based on the wavelet decomposition of their numerical representations. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The inputs of the web server require RNA primary sequences, while corresponding secondary structures are optional. For primary sequences alone, the web server can compute the secondary structures using a free-energy minimization algorithm, via the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNA structure comparison. Comparison results with two popular RNA comparison tools, RNApdist and RNAdistance, showed that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All the relevant results are shown in an intuitive graphical manner, and can be freely downloaded from this server. RNA-TVcurve, along with test examples and detailed documents, is available at: http://ml.jlu.edu.cn/tvcurve/ .
On the distribution of saliency.
Berengolts, Alexander; Lindenbaum, Michael
2006-12-01
Detecting salient structures is a basic task in perceptual organization. Saliency algorithms typically mark edge-points with some saliency measure, which grows with the length and smoothness of the curve on which these edge-points lie. Here, we propose a modified saliency estimation mechanism that is based on probabilistically specified grouping cues and on curve length distributions. In this framework, the Shashua and Ullman saliency mechanism may be interpreted as a process for detecting the curve with maximal expected length. Generalized types of saliency naturally follow. We propose several specific generalizations (e.g., gray-level-based saliency) and rigorously derive the limitations on generalized saliency types. We then carry out a probabilistic analysis of expected length saliencies. Using ergodicity and asymptotic analysis, we derive the saliency distributions associated with the main curves and with the rest of the image. We then extend this analysis to finite-length curves. Using the derived distributions, we derive the optimal threshold on the saliency for discriminating between figure and background and bound the saliency-based figure-from-ground performance.
Suppression of Phytophthora capsici on bell pepper with isolates of Trichoderma
USDA-ARS?s Scientific Manuscript database
Biologically based disease management strategies, including biological control, are being developed for Phytophthora capsici on bell pepper. Biological control agents that are effective in controlling this disease under a number of soil environmental conditions when applied alone or with cover crop...
Controlled overspray spray nozzle
NASA Technical Reports Server (NTRS)
Prasthofer, W. P. (Inventor)
1981-01-01
A spray system for a multi-ingredient ablative material wherein a nozzle A is utilized for suppressing overspray is described. The nozzle includes a cylindrical inlet which converges to a restricted throat. A curved juncture between the cylindrical inlet and the convergent portion affords unrestricted and uninterrupted flow of the ablative material. A divergent bell-shaped chamber and adjustable nozzle exit B is utilized which provides a highly effective spray pattern in suppressing overspray to an acceptable level and producing a homogeneous jet of material that adheres well to the substrate.
A View of the Therapy for Bell's Palsy Based on Molecular Biological Analyses of Facial Muscles.
Moriyama, Hiroshi; Mitsukawa, Nobuyuki; Itoh, Masahiro; Otsuka, Naruhito
2017-12-01
Details regarding the molecular biological features of Bell's palsy have not been widely reported in textbooks. We genetically analyzed facial muscles and clarified these points. We performed genetic analysis of facial muscle specimens from Japanese patients with severe (House-Brackmann facial nerve grading system V) and moderate (House-Brackmann facial nerve grading system III) dysfunction due to Bell's palsy. Microarray analysis of gene expression was performed using specimens from the healthy and affected sides, and gene expression was compared. Changes in gene expression were defined as an affected side/healthy side ratio of >1.5 or <0.5. We observed that gene expression in Bell's palsy changes with the degree of facial nerve palsy. In particular, genes in the muscle, neuron, and energy categories tended to fluctuate with the degree of facial nerve palsy. It is expected that this study will aid in the development of new treatments and diagnostic/prognostic markers based on the severity of facial nerve palsy.
Kumar, Joish Upendra; Kavitha, Y
2017-02-01
With the use of various surgical techniques and types of implants, the preoperative assessment of cochlear dimensions is becoming increasingly relevant prior to cochlear implantation. High resolution CISS protocol MRI gives a better assessment of the membranous cochlea, cochlear nerve, and membranous labyrinth, and the curved Multiplanar Reconstruction (MPR) algorithm provides images suitable for measuring the dimensions of the membranous cochlea. The aim was to ascertain the value of the curved MPR algorithm in high resolution 3-Dimensional T2 Weighted Gradient Echo Constructive Interference Steady State (3D T2W GRE CISS) imaging for accurate morphometry of the membranous cochlea. Fourteen children underwent MRI for inner ear assessment. The high resolution 3D T2W GRE CISS sequence was used to obtain images of the cochlea, and the curved MPR reconstruction algorithm was used to virtually uncoil the membranous cochlea on the volume images, on which cochlear measurements were made. Virtually uncoiled images of the membranous cochlea of appropriate resolution were obtained from the volume data after applying the curved MPR reconstruction algorithm. The mean membranous cochlear length in the children was 27.52 mm; the maximum apical turn diameter of the membranous cochlea was 1.13 mm, the mid turn diameter was 1.38 mm, and the basal turn diameter was 1.81 mm. The curved MPR reconstruction algorithm applied to CISS protocol images facilitates obtaining appropriate quality images of the membranous cochlea for accurate measurements.
Solution for the nonuniformity correction of infrared focal plane arrays.
Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao
2005-05-20
Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm can effectively overcome the influence of the nonlinearity of the detector response, and it improves the correction precision and enlarges the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs, based on a digital signal processor and field-programmable gate arrays, is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging with a 128 x 128 pixel IRFPA camera prototype.
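The linearize-then-two-point idea can be demonstrated end to end on synthetic data. A logistic function is used below as an assumed stand-in for the paper's S-curve model; after inversion, the per-pixel response is exactly linear, so the classical two-point gain/offset correction removes the nonuniformity.

```python
# S-curve nonuniformity correction sketch: invert the S-curve, then apply
# a standard two-point correction from two uniform calibration exposures.
import numpy as np

rng = np.random.default_rng(0)
shape = (128, 128)
gain = rng.normal(1.0, 0.1, shape)            # per-pixel nonuniformity
offset = rng.normal(0.0, 0.05, shape)

def detector(phi):                            # toy S-curve (logistic) response
    return 1.0 / (1.0 + np.exp(-(gain * phi + offset)))

def linearize(y):                             # invert the S-curve
    return np.log(y / (1.0 - y))

T1, T2 = -1.0, 1.0                            # two uniform calibration levels
y1, y2 = linearize(detector(T1)), linearize(detector(T2))
g = (T2 - T1) / (y2 - y1)                     # two-point gain map
o = T1 - g * y1                               # two-point offset map

corrected = g * linearize(detector(0.3)) + o  # uniform scene at level 0.3
print("residual nonuniformity (std):", corrected.std())   # ~ 0 after correction
```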
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
The DOHA algorithm: a new recipe for cotrending large-scale transiting exoplanet survey light curves
NASA Astrophysics Data System (ADS)
Mislis, D.; Pyrzas, S.; Alsubai, K. A.; Tsvetanov, Z. I.; Vilchez, N. P. E.
2017-03-01
We present
IMPLEMENTING A NOVEL CYCLIC CO2 FLOOD IN PALEOZOIC REEFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
James R. Wood; W. Quinlan; A. Wylie
2004-07-01
Recycled CO2 will be used in this demonstration project to produce bypassed oil from the Silurian Dover 35 pinnacle reef (Otsego County) in the Michigan Basin. We began injecting CO2 in the Dover 35 field into the Salling-Hansen 4-35A well on May 6, 2004. Subsurface characterization is being completed using well log tomography animations and 3D visualizations to map facies distributions and reservoir properties in three reefs, the Belle River Mills, Chester 18, and Dover 35 Fields. The Belle River Mills and Chester 18 fields are being used as type-fields because they have excellent log and/or core data coverage. Amplitude slicing of the log porosity, normalized gamma ray, core permeability, and core porosity curves is showing trends that indicate significant heterogeneity and compartmentalization in these reservoirs associated with the original depositional fabric of the rocks. Digital and hard copy data continues to be compiled for the Niagaran reefs in the Michigan Basin. Technology transfer took place through technical presentations regarding visualization of the heterogeneity of the Niagaran reefs. Oral presentations were given at the Petroleum Technology Transfer Council workshop, Michigan Oil and Gas Association Conference, and Michigan Basin Geological Society meeting. A technical paper was submitted to the Bulletin of the American Association of Petroleum Geologists on the characterization of the Belle River Mills Field.
Predicting drug-target interactions by dual-network integrated logistic matrix factorization
NASA Astrophysics Data System (ADS)
Hao, Ming; Bryant, Stephen H.; Wang, Yanli
2017-01-01
In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing profile kernel matrices; (2) diffusing the drug profile kernel matrix with the drug structure kernel matrix; (3) diffusing the target profile kernel matrix with the target sequence kernel matrix; and (4) building the DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under the precision-recall curve) and AUC (area under the receiver operating characteristic curve) based on 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends not only on the proposed objective function, but also on the nonlinear diffusion technique, which is important but understudied in the DTI prediction field. In addition, we also compile a new DTI dataset to increase the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
A serum protein-based algorithm for the detection of Alzheimer disease.
O'Bryant, Sid E; Xiao, Guanghua; Barber, Robert; Reisch, Joan; Doody, Rachelle; Fairchild, Thomas; Adams, Perrie; Waring, Steven; Diaz-Arrastia, Ramon
2010-09-01
Objective: To develop an algorithm that separates patients with Alzheimer disease (AD) from controls. Design: Longitudinal case-control study. Setting: The Texas Alzheimer's Research Consortium project. Patients: We analyzed serum protein-based multiplex biomarker data from 197 patients diagnosed with AD and 203 controls. Main outcome measure: The total sample was randomized equally into training and test sets and random forest methods were applied to the training set to create a biomarker risk score. Results: The biomarker risk score had a sensitivity and specificity of 0.80 and 0.91, respectively, and an area under the curve of 0.91 in detecting AD. When age, sex, education, and APOE status were added to the algorithm, the sensitivity, specificity, and area under the curve were 0.94, 0.84, and 0.95, respectively. Conclusions: These initial data suggest that serum protein-based biomarkers can be combined with clinical information to accurately classify AD. A disproportionate number of inflammatory and vascular markers were weighted most heavily in the analyses. Additionally, these markers consistently distinguished cases from controls in significance analysis of microarrays, logistic regression, and Wilcoxon analyses, suggesting the existence of an inflammatory-related endophenotype of AD that may provide targeted therapeutic opportunities for this subset of patients.
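The risk-score construction, a random forest trained on one half of the sample and evaluated by AUC on the held-out half, can be sketched with scikit-learn. Synthetic data stand in for the non-public serum multiplex panel; sample sizes and feature counts are illustrative.

```python
# Random-forest biomarker risk score with held-out AUC evaluation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)          # ~ AD cases vs controls
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xtr, ytr)
risk = rf.predict_proba(Xte)[:, 1]                  # biomarker risk score
print("test AUC: %.2f" % roc_auc_score(yte, risk))
# covariates (age, sex, education, APOE) would enter as additional columns
```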
Do Bells Affect Behaviour and Heart Rate Variability in Grazing Dairy Cows?
Johns, Julia; Patt, Antonia; Hillmann, Edna
2015-01-01
In alpine regions cows are often equipped with bells. The present study investigated the impact of wearing a bell on behaviour and heart rate variability in dairy cows. Nineteen non-lactating Brown-Swiss cows with bell experience were assigned to three different treatments. For 3 days each, cows were equipped with no bell (control), with a bell with inactivated clapper (silent bell) or with a functional bell (functional bell). The bells weighed 5.5 kg and had frequencies between 532 Hz and 2.8 kHz and amplitudes between 90 and 113 dB at a distance of 20 cm. Data were collected on either the first and third or on all 3 days of each treatment. Whereas duration of rumination was reduced with a functional bell and a silent bell compared with no bell, feeding duration was reduced with a silent bell and was intermediate with a functional bell. Head movements were reduced when wearing a silent bell compared with no bell and tended to be reduced when wearing a functional compared to no bell. With a functional bell, lying duration was reduced by almost 4 hours on the third day of treatment compared with the first day with a functional bell and compared with no bell or a silent bell. All additional behavioural measures are consistent with the hypothesis of a restriction in the behaviour of the cows wearing bells, although this pattern did not reach significance. There was no treatment effect on heart rate variability, suggesting that the bells did not affect vago-sympathetic balance. An effect of experimental day was found for only 1 out of 10 behavioural parameters, as shown by a decrease in lying with a functional bell on day 3. The results indicate behavioural changes in the cows wearing a bell over 3 days, without indication of habituation to the bell. Altogether, the behavioural changes suggest that the behaviour of the cows was disturbed by wearing a bell. If long-lasting, these effects may have implications for animal welfare. PMID:26110277
Enhancement of the Daytime MODIS Based Aircraft Icing Potential Algorithm Using Mesoscale Model Data
2006-03-01
Enhancement of Fast Face Detection Algorithm Based on a Cascade of Decision Trees
NASA Astrophysics Data System (ADS)
Khryashchev, V. V.; Lebedev, A. A.; Priorov, A. L.
2017-05-01
A face detection algorithm based on a cascade of ensembles of decision trees (CEDT) is presented. The new approach allows detecting faces in positions other than frontal through the use of multiple classifiers, each trained for a specific range of head rotation angles. The results show a high detection rate for CEDT on images of standard size. The algorithm increases the area under the ROC curve by 13% compared to the standard Viola-Jones face detection algorithm. The final realization of the algorithm consists of 5 different cascades for frontal/non-frontal faces. The simulation results also show the low computational complexity of the CEDT algorithm in comparison with the standard Viola-Jones approach. This could prove important in the embedded systems and mobile device industries because it can reduce hardware cost and extend battery life.
The Belle II Pixel Detector Data Acquisition and Background Suppression System
NASA Astrophysics Data System (ADS)
Lautenbach, K.; Deschamps, B.; Dingfelder, J.; Getzkow, D.; Geßler, T.; Konorov, I.; Kühn, W.; Lange, S.; Levit, D.; Liu, Z.-A.; Marinas, C.; Münchow, D.; Rabusov, A.; Reiter, S.; Spruck, B.; Wessel, C.; Zhao, J.
2017-06-01
The Belle II experiment at the future SuperKEKB collider in Tsukuba, Japan, features a design luminosity of 8 × 10^35 cm^-2 s^-1, which is a factor of 40 larger than that of its predecessor Belle. The pixel detector (PXD) with about 8 million pixels is based on the DEPFET technology and will improve the vertex resolution in beam direction by a factor of 2. With an estimated trigger rate of 30 kHz, the PXD is expected to generate a data rate of 20 GBytes/s, which is about 10 times larger than the amount of data generated by all other Belle II subdetectors. Due to the large beam-related background, the PXD requires a data acquisition system with high-bandwidth data links and real-time background reduction by a factor of 30. To achieve this, the Belle II pixel DAQ uses an FPGA-based computing platform with high-speed serial links implemented in the ATCA (Advanced Telecommunications Computing Architecture) standard. The architecture and performance of the data acquisition system and data reduction of the PXD will be presented. In April 2016 and February 2017 a prototype PXD-DAQ system operated in a test beam campaign delivered data with the whole readout chain under realistic high rate conditions. Final results from the beam test will be presented.
Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study.
Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Takai, Yoshihiro; Yoshizawa, Makoto
2015-05-01
To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors' proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using a LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor target. Compared with previous tracking methods, the authors' algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery, dose evaluation.
Tseng, Chih-Chieh; Hu, Li-Yu; Liu, Mu-En; Yang, Albert C; Shen, Cheng-Che; Tsai, Shih-Jen
2017-06-01
Bell's palsy and anxiety disorders share numerous risk factors (e.g., immune response, ischemia, and psychological stress). However, there have been no studies on the bidirectional temporal association between the two illnesses. In this study, we used the Taiwan National Health Insurance Research Database (NHIRD) to test the bidirectional association between Bell's palsy and anxiety disorders. We hypothesized that patients with Bell's palsy would have an increased risk of subsequent anxiety disorders later in life and that, conversely, those with anxiety disorders would have an increased likelihood of developing Bell's palsy later in life. We conducted two retrospective cohort studies using the NHIRD. Study 1 included 8070 patients diagnosed with anxiety disorders and 32,280 controls without anxiety disorders, matched by sex, age, and enrollment date, to analyze the subsequent risk of Bell's palsy in both groups. Study 2 included 4980 patients with Bell's palsy and 19,920 controls without Bell's palsy, matched by sex, age, and enrollment date, to analyze the subsequent risk of anxiety disorders in both groups. The patient records selected for the studies were dated between January 1, 2000, and December 31, 2004. All subjects were observed until their outcomes of interest, death, or December 31, 2009. After adjustment for age, sex, comorbidities, urbanization, and income, the hazard ratio (HR) for patients with anxiety disorders to contract Bell's palsy was 1.53 (95% CI, 1.21-1.94, P<.001), and the HR for patients with Bell's palsy to develop an anxiety disorder was 1.59 (95% CI, 1.23-2.06, P<.001). This study found a bidirectional temporal association between Bell's palsy and anxiety disorders. After one of these conditions develops, the morbidity rate for the other significantly increases. Additional studies are required to determine whether these two conditions share the same pathogenic mechanisms, and whether successfully treating one will reduce the morbidity rate for the other.
Edge Modes and Teleportation in a Topologically Insulating Quantum Wire
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghrear, Majd; Mackovic, Brie; Semenoff, Gordon W.
We find a simple model of an insulating state of a quantum wire which has a single isolated edge mode. We argue that, when brought to proximity, the edge modes on independent wires naturally form Bell entangled states which could be used for elementary quantum processes such as teleportation. We give an example of an algorithm which teleports the spin state of an electron from one quantum wire to another.
Methods to assess an exercise intervention trial based on 3-level functional data.
Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J
2015-10-01
Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study.
A novel semi-quantum secret sharing scheme based on Bell states
NASA Astrophysics Data System (ADS)
Yin, Aihan; Wang, Zefan; Fu, Fangbo
2017-05-01
A semi-quantum secret sharing (SQSS) scheme based on Bell states is proposed in this paper. The sender who can perform any relevant quantum operations uses Bell states to share the secret keys with her participants that are limited to perform classical operations on the transmitted qubits. It is found that our scheme is easy to generalize from three parties to multiparty and more efficient than the previous schemes [Q. Li, W. H. Chan and D. Y. Long, Phys. Rev. A 82 (2010) 022303; L. Z. Li, D. W. Qiu and P. Mateus, J. Phys. A: Math. Theor. 26 (2013) 045304; C. Xie, L. Z. Li and D. W. Qiu, Int. J. Theor. Phys. 54 (2015) 3819].
A Geometrical Approach to Bell's Theorem
NASA Technical Reports Server (NTRS)
Rubincam, David Parry
2000-01-01
Bell's theorem can be proved through simple geometrical reasoning, without the need for the Psi function, probability distributions, or calculus. The proof is based on N. David Mermin's explication of the Einstein-Podolsky-Rosen-Bohm experiment, which involves Stern-Gerlach detectors which flash red or green lights when detecting spin-up or spin-down. The statistics of local hidden variable theories for this experiment can be arranged in colored strips from which simple inequalities can be deduced. These inequalities lead to a demonstration of Bell's theorem. Moreover, all local hidden variable theories can be graphed in such a way as to enclose their statistics in a pyramid, with the quantum-mechanical result lying a finite distance beneath the base of the pyramid.
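A numerical companion to the argument above (our illustration, not from the report): enumerating all deterministic local hidden variable assignments reproduces the CHSH bound |S| <= 2 that the colored-strip inequalities imply, while the quantum singlet correlations exceed it. The angle choices and the correlation rule E(a, b) = -cos(a - b) are the standard textbook ones.

```python
# LHV bound vs quantum violation in the CHSH form of Bell's theorem (a sketch).
import itertools
import numpy as np

# Deterministic LHV: each side pre-assigns +/-1 outcomes to its two settings.
lhv_max = max(
    abs(a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1)
    for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4)
)

# Quantum singlet correlations: E(a, b) = -cos(angle between measurement axes).
def E(a, b):
    return -np.cos(a - b)

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4  # optimal CHSH angles
quantum_S = abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))

print(lhv_max)              # 2, the local hidden variable bound
print(round(quantum_S, 4))  # 2.8284, i.e. 2*sqrt(2), a finite distance beyond it
```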
ERIC Educational Resources Information Center
Parsley, Nancy J.; Rabinowitz, F. Michael
1975-01-01
Attempts to integrate the ethological interpretation of the Bell and Ainsworth findings on the promptness of mother reaction to infant crying, with operant laboratory infant research. Suggests an alternative operant interpretation based on the concept of counter conditioning. (Author/ED)
The Principles of Economics from Now until Then: A Comment.
ERIC Educational Resources Information Center
Amacher, Ryan C.
1988-01-01
Comments on Bell's article "The Principles of Economics from Now until Then," responding to the author's two general premises as well as citing specific arguments. Concludes that Bell's premises are based on a small and unrepresentative number of textbooks. Many textbooks that present material according to the recommendations were…
Aqueous alteration and brecciation in Bells, an unusual, saponite-bearing, CM chondrite
NASA Astrophysics Data System (ADS)
Brearley, Adrian J.
1995-06-01
The petrological and mineralogical characteristics of the unusual CM2 chondrite, Bells, have been investigated in detail by scanning electron microscopy (SEM), electron microprobe analysis (EPMA), and transmission electron microscopy (TEM). Bells is a highly brecciated chondrite which contains few intact chondrules, a very low abundance of refractory inclusions, and is notable in having an unusually high abundance of magnetite, which is disseminated throughout the fine-grained matrix. Fragmental olivines and pyroxenes are common and, based on compositional data, appear to have been derived from chondrules as a result of extensive brecciation. The fine-grained mineralogy of matrix in Bells differs considerably from other CM chondrites and has closer affinities to matrix in CI chondrites. The dominant phases are fine-grained saponite interlayered with serpentine, and phases such as tochilinite and cronstedtite, which are typical of CM chondrite matrices, are entirely absent. Pentlandite, pyrrhotite, magnetite, anhydrite, calcite, and rare Ti-oxides also occur as accessory phases. Based on its oxygen and noble gas isotopic compositions (Zadnik, 1985; Rowe et al., 1994), Bells can be considered to be a CM2 chondrite, although its bulk composition shows some departures from the typical range exhibited by this group. However, these variations in bulk chemistry are entirely consistent with the observed mineralogy of Bells. The unusual fine-grained mineralogy of Bells matrix can be reasonably attributed to the combined effects of aqueous alteration and advanced brecciation in a parent body environment. Extensive brecciation has assisted aqueous alteration by reducing chondrules and mineral grains into progressively smaller grains with high surface areas, which are more susceptible to dissolution reactions involving aqueous fluids. This has resulted in the preferential dissolution of Fe-rich chondrule olivines, which are now completely absent in Bells although present in other CM chondrites. The formation of saponite in Bells probably resulted from the dissolution of relatively silica-rich phases, such as pyroxene and olivine, that were derived from chondrules. The result of such dissolution reactions would be to increase the activity of silica in the fluid phase, at least on a localized scale, stabilizing saponite in preference to serpentine. An increase in a(SiO2) would also have destabilized preexisting cronstedtite, which may have reacted to form magnetite and Mg-Fe serpentine under conditions of constant f(O2).
Ruijter, Jan M; Pfaffl, Michael W; Zhao, Sheng; Spiess, Andrej N; Boggy, Gregory; Blom, Jochen; Rutledge, Robert G; Sisti, Davide; Lievens, Antoon; De Preter, Katleen; Derveaux, Stefaan; Hellemans, Jan; Vandesompele, Jo
2013-01-01
RNA transcripts such as mRNA or microRNA are frequently used as biomarkers to determine disease state or response to therapy. Reverse transcription (RT) in combination with quantitative PCR (qPCR) has become the method of choice to quantify small amounts of such RNA molecules. In parallel with the democratization of RT-qPCR and its increasing use in biomedical research or biomarker discovery, we witnessed a growth in the number of gene expression data analysis methods. Most of these methods are based on the principle that the position of the amplification curve with respect to the cycle-axis is a measure for the initial target quantity: the later the curve, the lower the target quantity. However, most methods differ in the mathematical algorithms used to determine this position, as well as in the way the efficiency of the PCR reaction (the fold increase of product per cycle) is determined and applied in the calculations. Moreover, there is dispute about whether the PCR efficiency is constant or continuously decreasing. Together this has led to the development of different methods to analyze amplification curves. In published comparisons of these methods, available algorithms were typically applied in a restricted or outdated way, which does not do them justice. Therefore, we aimed at the development of a framework for robust and unbiased assessment of curve analysis performance whereby various publicly available curve analysis methods were thoroughly compared using a previously published large clinical data set (Vermeulen et al., 2009) [11]. The original developers of these methods applied their algorithms and are co-authors of this study. We assessed the curve analysis methods' impact on transcriptional biomarker identification in terms of expression level, statistical significance, and patient-classification accuracy. The concentration series per gene, together with data sets from unpublished technical performance experiments, were analyzed in order to assess the algorithms' precision, bias, and resolution. While large differences exist between methods when considering the technical performance experiments, most methods perform relatively well on the biomarker data. The data and the analysis results per method are made available to serve as a benchmark for further development and evaluation of qPCR curve analysis methods (http://qPCRDataMethods.hfrc.nl).
NASA Astrophysics Data System (ADS)
Bezprozvanny, Ilya; Watras, James; Ehrlich, Barbara E.
1991-06-01
Release of calcium from intracellular stores occurs by two pathways, an inositol 1,4,5-trisphosphate (InsP3)-gated channel [1-3] and a calcium-gated channel (ryanodine receptor) [4-6]. Using specific antibodies, both receptors were found in Purkinje cells of cerebellum [7,8]. We have now compared the functional properties of the channels corresponding to the two receptors by incorporating endoplasmic reticulum vesicles from canine cerebellum into planar bilayers. InsP3-gated channels were observed most frequently. Another channel type was activated by adenine nucleotides or caffeine, inhibited by ruthenium red, and modified by ryanodine, characteristics of the ryanodine receptor/channel [6]. The open probability of both channel types displayed a bell-shaped curve for dependence on calcium. For the InsP3-gated channel, the maximum probability of opening occurred at 0.2 µM free calcium, with sharp decreases on either side of the maximum. Maximum activity for the ryanodine receptor/channel was maintained between 1 and 100 µM calcium. Thus, within the physiological range of cytoplasmic calcium, the InsP3-gated channel itself allows positive feedback and then negative feedback for calcium release, whereas the ryanodine receptor/channel behaves solely as a calcium-activated channel. The existence in the same cell of two channels with different responses to calcium and different ligand sensitivities provides a basis for complex patterns of intracellular calcium regulation.
[An Improved Cubic Spline Interpolation Method for Removing Electrocardiogram Baseline Drift].
Wang, Xiangkui; Tang, Wenpu; Zhang, Lai; Wu, Minghu
2016-04-01
The selection of fiducial points has an important effect on electrocardiogram (ECG) denoising with cubic spline interpolation. An improved cubic spline interpolation algorithm for suppressing ECG baseline drift is presented in this paper. First, the first-order derivative of the original ECG signal is calculated, and the maximum and minimum points of each beat are obtained and treated as the positions of the fiducial points. The original ECG is then fed into a high-pass filter with a 1.5 Hz cutoff frequency. The difference between the original and the filtered ECG at the fiducial points is taken as the amplitude of the fiducial points. Cubic spline interpolation is then fitted through the fiducial points, and the fitted curve is the baseline drift estimate. For the two simulated test cases, the correlation coefficients between the curve fitted by the presented algorithm and the simulated curve were increased by 0.242 and 0.13 compared with those from the traditional cubic spline interpolation algorithm. For the clinical baseline drift data, the average correlation coefficient of the presented algorithm reached 0.972.
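A minimal sketch of the described pipeline in Python, assuming a 1-D ECG array and its sampling rate; the fiducial-point rule (one derivative extremum per beat via peak detection) and the filter order are simplified stand-ins for the authors' exact procedure.

```python
# Baseline-drift removal via fiducial points + cubic spline (a hedged sketch).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks
from scipy.interpolate import CubicSpline

def remove_baseline(ecg, fs, min_beat_s=0.4):
    # High-pass at 1.5 Hz; original - filtered at the fiducial points
    # approximates the local baseline amplitude.
    b, a = butter(2, 1.5 / (fs / 2), btype="highpass")
    filtered = filtfilt(b, a, ecg)
    # Fiducial points: extrema of the first derivative, at most one per beat.
    deriv = np.gradient(ecg)
    peaks, _ = find_peaks(np.abs(deriv), distance=int(min_beat_s * fs))
    if len(peaks) < 2:
        return ecg  # not enough fiducial points to fit a spline
    baseline_amp = ecg[peaks] - filtered[peaks]
    # Cubic spline through the fiducial points is the baseline-drift estimate.
    spline = CubicSpline(peaks, baseline_amp)
    return ecg - spline(np.arange(len(ecg)))
```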
Nonlocality without counterfactual reasoning
NASA Astrophysics Data System (ADS)
Wolf, Stefan
2015-11-01
Nonlocal correlations are usually understood through the outcomes of alternative measurements (on two or more parts of a system) that cannot altogether actually be carried out in an experiment. Indeed, a joint input-output — e.g., measurement-setting-outcome — behavior is nonlocal if and only if the outputs for all possible inputs cannot coexist consistently. It has been argued that this counterfactual view is how Bell's inequalities and their violations are to be seen. I propose an alternative perspective which refrains from setting into relation the results of mutually exclusive measurements, but that is based solely on data actually available. My approach uses algorithmic complexity instead of probability and randomness, and implies that nonlocality has consequences similar to those in the probabilistic view. Our view is conceptually simpler than the traditional reasoning.
NASA Astrophysics Data System (ADS)
Wang, Qingjie; Xin, Jingmin; Wu, Jiayi; Zheng, Nanning
2017-03-01
Microaneurysms are the earliest clinical signs of diabetic retinopathy, and many algorithms have been developed for the automatic classification of this specific pathology. However, the imbalanced class distribution of the dataset usually causes the classification accuracy for true microaneurysms to be low. Therefore, by combining the borderline synthetic minority over-sampling technique (BSMOTE) with data cleaning techniques such as Tomek links and Wilson's edited nearest neighbor rule (ENN) to resample the imbalanced dataset, we propose two new support vector machine (SVM) classification algorithms for microaneurysms. The proposed BSMOTE-Tomek and BSMOTE-ENN algorithms consist of: 1) adaptive synthesis of minority samples in the neighborhood of the borderline, and 2) removal of redundant training samples to improve the efficiency of data utilization. Moreover, a modified SVM classifier with probabilistic outputs is used to divide the microaneurysm candidates into two groups: true microaneurysms and false microaneurysms. Experiments with a public microaneurysm database show that the proposed algorithms have better classification performance in terms of the receiver operating characteristic (ROC) curve and the free-response receiver operating characteristic (FROC) curve.
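The resampling chain maps naturally onto the imbalanced-learn package; the sketch below is our reconstruction under that assumption, not the authors' code, and the random toy data stands in for real microaneurysm candidate features.

```python
# BSMOTE + Tomek links / ENN cleaning in front of a probabilistic SVM (a sketch).
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE
from imblearn.under_sampling import TomekLinks, EditedNearestNeighbours
from imblearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def make_classifier(cleaner="tomek", random_state=0):
    # Step 1: synthesize minority samples near the class borderline.
    # Step 2: clean redundant/ambiguous samples with Tomek links or ENN.
    clean = TomekLinks() if cleaner == "tomek" else EditedNearestNeighbours()
    return Pipeline([
        ("bsmote", BorderlineSMOTE(random_state=random_state)),
        ("clean", clean),
        ("svm", SVC(kernel="rbf", probability=True)),  # probabilistic outputs
    ])

# Toy stand-in: 10% minority class (1 = true microaneurysm).
X = np.random.randn(500, 10)
y = (np.random.rand(500) < 0.1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_classifier("tomek").fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```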
Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm
NASA Astrophysics Data System (ADS)
Zhao, Jinling; Guo, Junjie; Cheng, Wenjie; Xu, Chao; Huang, Linsheng
2017-07-01
A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect, and identification potential. More specifically, spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curve shape of SPOT-6 is closer to a rectangle than that of GF-1 in the blue, green, red, and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with the wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse fused image has an entropy value extremely similar to the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify greenlands to compare the self-fused image of SPOT-6 with the inter-fused image of SPOT-6 and GF-1 based on the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.
Research on Environmental Adjustment of Cloud Ranch Based on BP Neural Network PID Control
NASA Astrophysics Data System (ADS)
Ren, Jinzhi; Xiang, Wei; Zhao, Lin; Wu, Jianbo; Huang, Lianzhen; Tu, Qinggang; Zhao, Heming
2018-01-01
In order to gradually replace traditional manual ranch management with an intelligent management mode, this paper proposes a pasture environment control system based on a cloud server and puts forward a PID control algorithm based on a BP neural network to better control temperature and humidity in the pasture environment. First, the temperature and humidity of the pasture (the controlled object) are modeled to obtain the transfer function. Then the traditional PID control algorithm and the BP-neural-network-based PID algorithm are applied to the transfer function. The obtained step-tracking curves show that the PID controller based on a BP neural network has obvious superiority in settling time, error, and other metrics. This algorithm, which calculates reasonable control parameters for temperature and humidity, can be well integrated into the cloud service platform.
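For illustration, a minimal discrete PID loop on a toy first-order plant follows; in the paper the three gains are adapted online by a BP neural network, whereas here they are fixed constants, an assumption made only to keep the sketch short.

```python
# Minimal discrete incremental PID loop (a sketch, not the paper's controller).
class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0  # previous two errors

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        # Incremental form: du = kp*(e - e1) + ki*e + kd*(e - 2*e1 + e2).
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return du

# Toy first-order temperature plant: T' = (u - T) / tau, Euler-discretized.
pid, T, u, tau, dt = IncrementalPID(2.0, 0.1, 0.5), 15.0, 0.0, 50.0, 1.0
for _ in range(300):
    u += pid.step(25.0, T)        # track a 25 degree setpoint
    T += dt * (u - T) / tau
print(round(T, 2))                # should settle near 25.0
```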
Quantum cryptography using entangled photons in energy-time bell states
Tittel; Brendel; Zbinden; Gisin
2000-05-15
We present a setup for quantum cryptography based on photon pairs in energy-time Bell states and show its feasibility in a laboratory experiment. Our scheme combines the advantages of using photon pairs instead of faint laser pulses and the possibility to preserve energy-time entanglement over long distances. Moreover, using four-dimensional energy-time states, no fast random change of bases is required in our setup: Nature itself decides whether to measure in the energy or in the time base, thus rendering eavesdropper attacks based on "photon number splitting" less efficient.
Rocchini, Duccio
2009-01-01
Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600
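One plausible form of such an algorithmic solution, sketched under the standard ecological rarefaction formula with DN classes playing the role of species; this is our reconstruction, not the paper's code.

```python
# Rarefaction curve over image pixel values (a sketch under stated assumptions).
import numpy as np
from scipy.special import gammaln

def rarefaction_curve(dn_values, max_n=None):
    """Expected number of distinct DN classes in a random sample of n pixels."""
    _, counts = np.unique(np.asarray(dn_values).ravel(), return_counts=True)
    N = int(counts.sum())
    max_n = max_n or N
    ns = np.arange(1, max_n + 1)

    def log_comb(a, b):  # log of the binomial coefficient C(a, b)
        return gammaln(a + 1) - gammaln(b + 1) - gammaln(a - b + 1)

    curve = []
    for n in ns:
        # P(class i absent from a sample of size n) = C(N - Ni, n) / C(N, n).
        can_miss = counts <= N - n  # classes that can be entirely missed
        p_absent = np.zeros(len(counts))
        p_absent[can_miss] = np.exp(log_comb(N - counts[can_miss], n)
                                    - log_comb(N, n))
        curve.append(float(np.sum(1.0 - p_absent)))
    return ns, np.array(curve)

# Alpha diversity of a local window and gamma diversity of the whole image can
# then be compared by evaluating both curves at a common sample size n.
```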
The computation of all plane/surface intersections for CAD/CAM applications
NASA Technical Reports Server (NTRS)
Hoitsma, D. H., Jr.; Roche, M.
1984-01-01
The problem of the computation and display of all intersections of a given plane with a rational bicubic surface patch for use on an interactive CAD/CAM system is examined. The general problem of calculating all intersections of a plane and a surface consisting of rational bicubic patches is reduced to the case of a single generic patch by applying a rejection algorithm which excludes all patches that do not intersect the plane. For each pertinent patch, the algorithm presented computes the intersection curves by locating an initial point on each curve and computing successive points on the curve using a tolerance step equation. A single cubic equation solver is used to compute the initial curve points lying on the boundary of a surface patch, and the method of resultants as applied to curve theory is used to determine critical points which, in turn, are used to locate initial points that lie on intersection curves in the interior of the patch. Examples are given to illustrate the ability of this algorithm to produce all intersection curves.
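A simplified numerical analogue of the marching step is sketched below; the paper works with exact resultants and a tolerance step equation, both replaced here by finite differences, a fixed step, and a toy patch, so this is an illustration rather than the authors' method.

```python
# Predictor-corrector tracing of a plane/patch intersection curve (a sketch).
# Assumes the gradient of f does not vanish along the traced curve.
import numpy as np

def trace_intersection(S, n, d, uv0, step=0.01, n_steps=200):
    f = lambda uv: np.dot(n, S(uv)) - d     # signed distance to the plane

    def grad(uv, h=1e-6):                   # finite-difference gradient in (u, v)
        return np.array([
            (f(uv + [h, 0]) - f(uv - [h, 0])) / (2 * h),
            (f(uv + [0, h]) - f(uv - [0, h])) / (2 * h),
        ])

    uv, points = np.asarray(uv0, dtype=float), []
    for _ in range(n_steps):
        g = grad(uv)
        t = np.array([-g[1], g[0]])              # tangent: perpendicular to grad
        uv = uv + step * t / np.linalg.norm(t)   # predictor along the curve
        for _ in range(3):                       # corrector: Newton toward f = 0
            g = grad(uv)
            uv = uv - f(uv) * g / np.dot(g, g)
        if not (0 <= uv[0] <= 1 and 0 <= uv[1] <= 1):
            break                                # left the patch domain
        points.append(S(uv))
    return np.array(points)

S = lambda uv: np.array([uv[0], uv[1], uv[0] * uv[1] + 0.25])  # toy "patch"
pts = trace_intersection(S, n=np.array([0.0, 0.0, 1.0]), d=0.5, uv0=[0.5, 0.5])
print(len(pts), pts[0])  # points sampled along the curve u*v = 0.25
```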
Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Aarnink, Rene G.; de la Rosette, Jean J.; Chalana, Vikram; Wijkstra, Hessel; Haynor, David R.; Debruyne, Frans M. J.; Kim, Yongmin
1998-06-01
With the number of men seeking medical care for prostate diseases rising steadily, clinicians increasingly need a fast and accurate tool for prostate boundary detection and volume estimation. Currently, these measurements are made manually, which results in long examination times. A possible solution is to improve efficiency by automating the boundary detection and volume estimation process with minimal involvement from human experts. In this paper, we present an algorithm based on SNAKES to detect the boundaries. Our approach is to selectively enhance the contrast along the edges using an algorithm called sticks and to integrate it with a SNAKES model. This integrated algorithm requires an initial curve for each ultrasound image to initiate the boundary detection process. We have used different schemes to generate the curves with varying degrees of automation and evaluated their effects on the algorithm's performance. After the boundaries are identified, the prostate volume is calculated using planimetric volumetry. We have tested our algorithm on 6 different prostate volumes and compared the performance against the volumes manually measured by 3 experts. With increased user input, the algorithm's performance improved as expected. The results demonstrate that, given an initial contour reasonably close to the prostate boundaries, the algorithm successfully delineates the prostate boundaries in an image, and the resulting volume measurements are in close agreement with those made by the human experts.
Bidirectional Teleportation Protocol in Quantum Wireless Multi-hop Network
NASA Astrophysics Data System (ADS)
Cai, Rui; Yu, Xu-Tao; Zhang, Zai-Chen
2018-06-01
We propose a bidirectional quantum teleportation protocol based on a composite GHZ-Bell state. In this protocol, the composite GHZ-Bell state channel is transformed into two-Bell state channel through gate operations and single qubit measurements. The channel transformation will lead to different kinds of quantum channel states, so a method is proposed to help determine the unitary matrices effectively under different quantum channels. Furthermore, we discuss the bidirectional teleportation protocol in the quantum wireless multi-hop network. This paper is aimed to provide a bidirectional teleportation protocol and study the bidirectional multi-hop teleportation in the quantum wireless communication network.
DOT National Transportation Integrated Search
1977-04-01
This data report contains the measured noise levels obtained from an FAA Helicopter Noise Test Program. The purpose of this test program was to provide a data base for a possible helicopter noise certification rule. The noise data presented in this t...
Kumar, Saurabh; Amrutur, Bharadwaj; Asokan, Sundarrajan
2018-02-01
Fiber Bragg Grating (FBG) sensors have become popular for applications related to structural health monitoring, biomedical engineering, and robotics. However, for successful large scale adoption, FBG interrogation systems are as important as sensor characteristics. Apart from accuracy, the required number of FBG sensors per fiber and the distance between the device in which the sensors are used and the interrogation system also influence the selection of the interrogation technique. For several measurement devices developed for applications in biomedical engineering and robotics, only a few sensors per fiber are required and the device is close to the interrogation system. For these applications, interrogation systems based on InGaAs linear detector arrays provide a good choice. However, their resolution is dependent on the algorithms used for curve fitting. In this work, a detailed analysis of the choice of algorithm using the Gaussian approximation for the FBG spectrum and the number of pixels used for curve fitting on the errors is provided. The points where the maximum errors occur have been identified. All comparisons for wavelength shift detection have been made against another interrogation system based on the tunable swept laser. It has been shown that maximum errors occur when the wavelength shift is such that one new pixel is included for curve fitting. It has also been shown that an algorithm with lower computation cost compared to the more popular methods using iterative non-linear least squares estimation can be used without leading to the loss of accuracy. The algorithm has been implemented on embedded hardware, and a speed-up of approximately six times has been observed.
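One classic low-cost algorithm consistent with this description is the log-parabolic (three-point Gaussian) fit, which gives the peak centre in closed form under the Gaussian approximation; the sketch below is our illustration of that technique, not the paper's implementation.

```python
# Sub-pixel FBG peak position from a linear detector array readout (a sketch).
import numpy as np

def gaussian_peak_centre(intensities):
    """Closed-form peak centre under the Gaussian approximation."""
    y = np.asarray(intensities, dtype=float)
    k = int(np.argmax(y))
    if k == 0 or k == len(y) - 1:
        return float(k)  # peak on the array edge; no interpolation possible
    lm, l0, lp = np.log(y[k - 1]), np.log(y[k]), np.log(y[k + 1])
    # Vertex of the parabola through the three log-intensities.
    delta = 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)
    return k + delta

# Synthetic check: a Gaussian centred at pixel 12.3 is recovered closely.
x = np.arange(25)
pixels = np.exp(-0.5 * ((x - 12.3) / 2.0) ** 2) + 1e-6
print(round(gaussian_peak_centre(pixels), 3))  # ~12.3
```

Note how the formula's error grows near the transition where a new pixel enters the fit, matching the error behaviour the abstract reports.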
NASA Astrophysics Data System (ADS)
Ye, Tian-Yu
2016-09-01
Recently, Liu et al. proposed a two-party quantum private comparison (QPC) protocol using entanglement swapping of Bell entangled states (Commun. Theor. Phys. 57 (2012) 583). Subsequently, Liu et al. pointed out that in Liu et al.'s protocol the TP can extract the two users' secret inputs without being detected by launching the Bell-basis measurement attack, and suggested a corresponding improvement to mend this loophole (Commun. Theor. Phys. 62 (2014) 210). In this paper, we first point out the information leakage problem toward TP that exists in both of the above protocols, and then suggest a corresponding improvement by using a one-way hash function to encrypt the two users' secret inputs. We further put forward a three-party QPC protocol, also based on entanglement swapping of Bell entangled states, and validate its output correctness and security in detail. Finally, we generalize the three-party QPC protocol to the multi-party case, which can accomplish arbitrary pairwise comparison of equality among K users within one execution. Supported by the National Natural Science Foundation of China under Grant No. 61402407
NASA Astrophysics Data System (ADS)
Beterov, I. I.; Hamzina, G. N.; Yakshina, E. A.; Tretyakov, D. B.; Entin, V. M.; Ryabtsev, I. I.
2018-03-01
High-fidelity entangled Bell states are of great interest in quantum physics. Entanglement of ultracold neutral atoms in two spatially separated optical dipole traps is promising for implementation of quantum computing and quantum simulation and for investigation of Bell states of material objects. We propose a method to entangle two atoms via long-range Rydberg-Rydberg interaction. Alternative to previous approaches, based on Rydberg blockade, we consider radio-frequency-assisted Stark-tuned Förster resonances in Rb Rydberg atoms. To reduce the sensitivity of the fidelity of Bell states to the fluctuations of interatomic distance, we propose to use the double adiabatic passage across the radio-frequency-assisted Stark-tuned Förster resonances, which results in a deterministic phase shift of the collective two-atom state.
Remote sensing for urban planning
NASA Technical Reports Server (NTRS)
Davis, Bruce A.; Schmidt, Nicholas; Jensen, John R.; Cowen, Dave J.; Halls, Joanne; Narumalani, Sunil; Burgess, Bryan
1994-01-01
Utility companies are challenged to provide services to a highly dynamic customer base. With factory closures and shifts in employment becoming a routine occurrence, the utility industry must develop new techniques to maintain records and plan for expected growth. BellSouth Telecommunications, the largest of the Bell telephone companies, currently serves over 13 million residences and 2 million commercial customers. Tracking the movement of customers and scheduling the delivery of service are major tasks for BellSouth that require intensive manpower and sophisticated information management techniques. Through NASA's Commercial Remote Sensing Program Office, BellSouth is investigating the utility of remote sensing and geographic information system techniques to forecast residential development. This paper highlights the initial results of this project, which indicate a high correlation between the U.S. Bureau of Census block group statistics and statistics derived from remote sensing data.
Using evaluation to improve program quality based on the BELL model.
Phalen, Earl Martin; Cooper, Tiffany M
2007-01-01
Building Educated Leaders for Life (BELL) is a national not-for-profit organization whose mission is to increase the educational achievements, self-esteem, and life opportunities of elementary school children living in low-income urban communities. BELL has been engaged in formal evaluation, internally and externally, for more than five years and has built internal evaluation capacity by investing in a specialized full-time evaluation team. As part of a continuous program improvement model of evaluation, BELL uses the data to refine program implementation and replicate successful elements of the services and operations. In this chapter, the authors highlight best practices from the field by outlining BELL's approach to using evaluation data for continuous program improvement. Key strategies include (1) carefully identifying intended users of the evaluation throughout the organization and among its external stakeholders, then working closely with intended users throughout the evaluation process, ensuring full engagement at every step of the process; (2) reporting findings in a readable, user-friendly format and timing the reporting so that it is aligned with programmatic decision making and planning cycles; and (3) making and supporting explicit recommendations for the next program cycle, where intended users have agreed to recommendations and ownership is assigned. BELL's successful use of data for improvement is evidenced by the consistently strong outcomes for the students it serves as well as increased efficiency and satisfaction related to service delivery that has supported the replication of BELL's programs nationally.
Bell's palsy in children: Current treatment patterns in Australia and New Zealand. A PREDICT study.
Babl, Franz E; Gardiner, Kaya K; Kochar, Amit; Wilson, Catherine L; George, Shane A; Zhang, Michael; Furyk, Jeremy; Thosar, Deepali; Cheek, John A; Krieser, David; Rao, Arjun S; Borland, Meredith L; Cheng, Nicholas; Phillips, Natalie T; Sinn, Kam K; Neutze, Jocelyn M; Dalziel, Stuart R
2017-04-01
The aetiology and clinical course of Bell's palsy may be different in paediatric and adult patients. There is no randomised placebo controlled trial (RCT) to show effectiveness of prednisolone for Bell's palsy in children. The aim of the study was to assess current practice in paediatric Bell's palsy in Australia and New Zealand Emergency Departments (ED) and determine the feasibility of conducting a multicentre RCT within the Paediatric Research in Emergency Departments International Collaborative (PREDICT). A retrospective analysis of ED medical records of children less than 18 years diagnosed with Bell's palsy between 1 January, 2012 and 31 December, 2013 was performed. Potential participants were identified from ED information systems using Bell's palsy related search terms. Repeat presentations during the same illness were excluded but relapses were not. Data on presentation, diagnosis and management were entered into an online database (REDCap). Three hundred and twenty-three presentations were included from 14 PREDICT sites. Mean age at presentation was 9.0 (SD 5.0) years with 184 (57.0%) females. Most (238, 73.7%) presented to ED within 72 h of symptoms; 168 (52.0%) had seen a doctor prior. In ED, 218 (67.5%) were treated with steroids. Prednisolone was usually prescribed for 9 days at around 1 mg/kg/day, with tapering in 35.7%. Treatment of Bell's palsy in children presenting to Australasian EDs is varied. Prednisolone is commonly used in Australasian EDs, despite lack of high-level paediatric evidence. The study findings confirm the feasibility of an RCT of prednisolone for Bell's palsy in children.
Management of blight of bell pepper (Capsicum annuum var. grossum) caused by Drechslera bicolor.
Jadon, Kuldeep Singh; Shah, Rakesh; Gour, Hari Narayan; Sharma, Pankaj
Sweet or bell pepper is a member of the Solanaceae family and is regarded as one of the most popular and nutritious vegetables. Blight, in the form of leaf and fruit blight, has been observed to infect bell pepper crops cultivated at the horticulture farm of Rajasthan College of Agriculture, Udaipur, India. Based on disease severity, we attempted to curb this newly emerged problem using different fungicides, plant extracts, bio-control agents, and commercial botanicals against the fungus in laboratory and pot experiments. The bio-control agent Trichoderma viride and the plant growth promoting rhizobacteria (PGPR) isolate Neist-2 were found to be quite effective against bell pepper blight. All fungicides, botanicals, commercial botanicals, and bio-control agents evaluated in vitro were further studied as seed dressers and as two foliar sprays at ten-day intervals in pot experiments. The combination of Vitavax, PGPR isolate Neist-2, and Mehandi extract was found to be very effective against bell pepper blight, followed by Vitavax, T. viride, and Mehandi extract used individually. All treatments in the pot experiments significantly reduced seedling mortality and enhanced plant biomass of bell pepper. These experimental findings suggest that better integrated management of bell pepper blight could be achieved by conducting field trials in the major bell pepper- and chilli-cultivated areas of the state. Besides fungicides, different botanicals and commercial botanicals also seem to be promising treatment options. Therefore, the present study provides an alternative to fungicide use in minimizing loss caused by Drechslera bicolor.
SHAMROCK: A Synthesizable High Assurance Cryptography and Key Management Coprocessor
2016-11-01
and excluding devices from a communicating group as they become trusted, or untrusted. An example of using rekeying to dynamically adjust group... algorithms, such as the Elliptic Curve Digital Signature Algorithm (ECDSA), work by computing a cryptographic hash of a message using, for example, the... material is based upon work supported by the Assistant Secretary of Defense for Research and Engineering under Air Force Contract No. FA8721-05-C
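The fragment above refers to ECDSA signing a message hash; a minimal, self-contained illustration with the widely used Python cryptography package follows, unrelated to the SHAMROCK coprocessor itself.

```python
# ECDSA sign/verify over a message hash (an illustration, not SHAMROCK code).
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())  # NIST P-256 curve
message = b"group rekeying announcement"               # hypothetical payload

# Sign: the message is hashed (SHA-256 here) and the digest is signed.
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verify with the public key; raises InvalidSignature on tampering.
public_key = private_key.public_key()
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```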
NASA Astrophysics Data System (ADS)
Bulgakov, V. K.; Strigunov, V. V.
2009-05-01
The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
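As a concrete miniature of the boundary value problem the maximum principle produces, consider minimizing the integral of x^2 + u^2 subject to x' = u: stationarity of the Hamiltonian gives u* = -p/2, and the resulting state-costate system can be solved numerically. This is our toy example, not the paper's macroeconomic model.

```python
# Two-point BVP from the maximum principle: x' = -p/2, p' = -2x,
# with x(0) = 1 and free endpoint (transversality p(T) = 0).
import numpy as np
from scipy.integrate import solve_bvp

T = 2.0

def odes(t, y):                  # y[0] = state x, y[1] = costate p
    x, p = y
    return np.vstack([-p / 2.0, -2.0 * x])

def bcs(ya, yb):
    return np.array([ya[0] - 1.0, yb[1]])   # x(0) = 1, p(T) = 0

t = np.linspace(0.0, T, 50)
sol = solve_bvp(odes, bcs, t, np.zeros((2, t.size)))
u_opt = -sol.sol(t)[1] / 2.0     # optimal control recovered from the costate
print(sol.status, round(u_opt[0], 3))  # status 0 = converged; u(0) ~ -tanh(T)
```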
Biostability analysis for drinking water distribution systems.
Srinivasan, Soumya; Harrington, Gregory W
2007-05-01
The ability to limit regrowth in drinking water is referred to as biological stability and depends on the concentration of disinfectant residual and on the concentration of substrate required for the growth of microorganisms. The biostability curve, based on this fundamental concept of biological stability, is a graphical approach to study the two competing effects that determine bacterial regrowth in a distribution system: inactivation due to the presence of a disinfectant, and growth due to the presence of a substrate. Biostability curves are a practical, system specific approach for addressing the problem of bacterial regrowth in distribution systems. This paper presents a standardized algorithm for generating biostability curves and this will enable water utilities to incorporate this approach for their site-specific needs. Using data from pilot scale studies, it was found that this algorithm was applicable to control regrowth of HPC in chlorinated systems where AOC is the growth limiting substrate, and growth of AOB in chloraminated systems, where ammonia is the growth limiting substrate.
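A hedged sketch of how such a curve can be generated under simple assumed kinetics (Monod growth versus first-order inactivation); the paper's standardized algorithm is system-specific and data-driven, and the parameter values below are illustrative only.

```python
# Biostability curve under assumed Monod growth and first-order inactivation.
import numpy as np

def biostability_curve(mu_max, Ks, k_inact, c_grid):
    """For each disinfectant residual C, return the substrate level S* at which
    growth exactly balances inactivation; (C, S) points below the curve are
    biologically stable."""
    s_star = np.full_like(c_grid, np.inf, dtype=float)
    feasible = k_inact * c_grid < mu_max   # else inactivation wins at any S
    # mu_max * S / (Ks + S) = k_inact * C  =>  S* = Ks * kC / (mu_max - kC)
    kc = k_inact * c_grid[feasible]
    s_star[feasible] = Ks * kc / (mu_max - kc)
    return s_star

c = np.linspace(0.05, 2.0, 40)  # mg/L residual (illustrative range)
print(biostability_curve(mu_max=0.5, Ks=50.0, k_inact=0.3, c_grid=c)[:3])
```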
Image-based spectroscopy for environmental monitoring
NASA Astrophysics Data System (ADS)
Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind
2014-03-01
An image-processing algorithm for use with a nano-featured spectrometer chemical agent detection configuration is presented. The spectrometer chip acquired from Nano-Optic Devices™ can reduce the size of the spectrometer down to a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within the image. This transform detects curves by exploiting the duality between points on a curve and the parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.
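As an illustration of the region-detection step, the sketch below uses OpenCV's circular Hough transform as a stand-in for the generalized Hough transform described above; the file name and all parameter values are assumptions.

```python
# Threshold filter + Hough transform on a diffraction-pattern image (a sketch).
import cv2
import numpy as np

image = cv2.imread("diffraction_pattern.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise SystemExit("image not found")

# Threshold filter to suppress random background noise before detection.
_, clean = cv2.threshold(image, 40, 255, cv2.THRESH_TOZERO)

# Each accumulator peak (a, b, r) is a candidate circular sub-pattern.
circles = cv2.HoughCircles(
    clean, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
    param1=100, param2=30, minRadius=5, maxRadius=80,
)
if circles is not None:
    for a, b, r in np.round(circles[0]).astype(int):
        print(f"circle centre=({a}, {b}) radius={r}")
```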
van de Graaf, R C; IJpma, F F A; Nicolai, J-P A; Werker, P M N
2009-11-01
Bell's palsy is the eponym for idiopathic peripheral facial paralysis. It is named after Sir Charles Bell (1774-1842), who, in the first half of the nineteenth century, discovered the function of the facial nerve and attracted the attention of the medical world to facial paralysis. Our knowledge of this condition before Bell's landmark publications is very limited and is based on just a few documents. In 1804 and 1805, Evert Jan Thomassen à Thuessink (1762-1832) published what appears to be the first known extensive study on idiopathic peripheral facial paralysis. His description of this condition was quite accurate. He located several other early descriptions and concluded from this literature that, previously, the condition had usually been confused with other afflictions (such as 'spasmus cynicus', central facial paralysis and trigeminal neuralgia). According to Thomassen à Thuessink, idiopathic peripheral facial paralysis and trigeminal neuralgia were related, being different expressions of the same condition. Thomassen à Thuessink believed that idiopathic peripheral facial paralysis was caused by 'rheumatism' or exposure to cold. Many aetiological theories have since been proposed. Despite this, the cold hypothesis persists even today.
Distribution of Bell-inequality violation versus multiparty-quantum-correlation measures
NASA Astrophysics Data System (ADS)
Sharma, Kunal; Das, Tamoghna; SenDe, Aditi; Sen, Ujjwal
2016-06-01
Violation of a Bell inequality guarantees the existence of quantum correlations in a shared quantum state. A pure bipartite quantum state, having nonvanishing quantum correlation, always violates a Bell inequality. Such correspondence is absent for multipartite pure quantum states in the case of multipartite correlation function Bell inequalities with two settings at each site. We establish a connection between the monogamy of Bell-inequality violation and multiparty quantum correlations for shared multisite quantum states. We believe that the relation is generic, as it is true for a number of different multisite measures that are defined from radically different perspectives. Precisely, we quantify the multisite-quantum-correlation content in the states by generalized geometric measure, a genuine multisite entanglement measure, as well as three monogamy-based multiparty-quantum-correlation measures, viz., 3-tangle, quantum-discord score, and quantum-work-deficit score. We find that generalized Greenberger-Horne-Zeilinger states and another single-parameter family of states, which we refer to as the special Greenberger-Horne-Zeilinger states, have the status of extremal states in such relations.
Electric Transport Traction Power Supply System With Distributed Energy Sources
NASA Astrophysics Data System (ADS)
Abramov, E. Y.; Schurov, N. I.; Rozhkova, M. V.
2016-04-01
The paper states the problem of leveling the daily load curve of traction substations (TSS) for urban electric transport. A circuit for a traction power supply system (TPSS) with a distributed autonomous energy source (AES) based on photovoltaic (PV) and energy storage (ES) units is presented. A power-flow distribution algorithm for leveling the daily traction load curve is also introduced. In addition, the paper describes the implemented experimental model of the power supply system.
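A toy version of the leveling idea follows; this is our sketch, not the paper's algorithm: the storage discharges when the traction load is above the daily mean and charges when below, within assumed power and energy limits.

```python
# Greedy daily load-curve leveling with an energy storage unit (a sketch).
import numpy as np

def level_load(load_kw, es_power_kw=500.0, es_energy_kwh=2000.0, dt_h=1.0):
    target = load_kw.mean()          # flatten toward the daily mean
    soc = es_energy_kwh / 2.0        # start half charged
    leveled = np.empty_like(load_kw)
    for i, p in enumerate(load_kw):
        want = p - target            # >0: discharge, <0: charge
        p_es = np.clip(want, -es_power_kw, es_power_kw)
        p_es = min(p_es, soc / dt_h)                     # can't over-discharge
        p_es = max(p_es, (soc - es_energy_kwh) / dt_h)   # can't over-charge
        soc -= p_es * dt_h
        leveled[i] = p - p_es        # what the substation actually supplies
    return leveled

hours = np.arange(24)
load = (800 + 600 * np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
            + 500 * np.exp(-0.5 * ((hours - 18) / 2.0) ** 2))  # rush-hour peaks
flat = level_load(load.astype(float))
print(round(np.ptp(load), 1), "->", round(np.ptp(flat), 1))  # peak-to-peak shrinks
```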
Efficient Implementation of the Pairing on Mobilephones Using BREW
NASA Astrophysics Data System (ADS)
Yoshitomi, Motoi; Takagi, Tsuyoshi; Kiyomoto, Shinsaku; Tanaka, Toshiaki
Pairing based cryptosystems can accomplish novel security applications such as ID-based cryptosystems, which have not been constructed efficiently without the pairing. The processing speed of pairing based cryptosystems is relatively slow compared with other conventional public key cryptosystems. However, several efficient algorithms for computing the pairing have been proposed, namely the Duursma-Lee algorithm and its variant, the ηT pairing. In this paper, we present an efficient implementation of the pairing over some mobilephones. Moreover, we compare the processing speed of the pairing with that of other standard public key cryptosystems, i.e., the RSA cryptosystem and the elliptic curve cryptosystem. Indeed, the processing speed of our implementation on ARM9 processors on BREW achieves under 100 milliseconds using the supersingular curve over F(3^97). In addition, the pairing is more efficient than the other public key cryptosystems, and the pairing can be computed efficiently on BREW mobilephones. It has become efficient enough to implement security applications, such as short signatures, ID-based cryptosystems, or broadcast encryption, using the pairing on BREW mobilephones.
NASA Technical Reports Server (NTRS)
Breininger, David R.
1988-01-01
The least bell's vireo (Vireo bellii pusillus) was listed in 1986 as an endangered species by the U.S. Fish and Wildlife Service. Because of the possibility of the species existing on Vandenberg Air Force Base (VAFB), this survey was conducted to determine if they exist, and if so to prepare a distribution map of the species on the base. Major riparian areas were surveyed on foot for 17 days in April, May, and July 1987. No least bell's vireo were sighted; based on past studies, it is unlikely that there is a significant population on VAFB. There are, however, at least 13 other species of special concern that inhabit VAFB riparian woodlands. Most of these species have declined along the south coast of Santa Barbara County, and many have declined in much of the southern half of California. Riparian areas on VAFB are an important environmental resource for the southern half of California; many of these areas, however, show signs of degradation.
Goel, Ruchi; Kishore, Divya; Nagpal, Smriti; Jain, Sparshi; Agarwal, Tushar
2017-01-01
Background: Recovery of Bell's phenomenon after levator resection is unpredictable. Delayed recovery can result in vision-threatening corneal complications. Aim: To study the variability of Bell's phenomenon and the time taken for its recovery following levator resection for blepharoptosis, and to correlate it with the amount of resection. Methods: A prospective observational study was conducted on 32 eyes of 32 patients diagnosed with unilateral simple congenital blepharoptosis who underwent levator resection at a tertiary care center between July 2013 and May 2015. Patients were followed up for 5 months, and correction of ptosis, type of Bell's phenomenon, duration of Bell's recovery, and complications were noted. Results: The study group ranged from 16-25 years with a 15:17 male:female ratio. There were 9 mild, 16 moderate, and 7 severe ptosis. Satisfactory correction was achieved in all cases. Good Bell's recovery occurred in 13 eyes on the first post-op day, in 2-14 days in 19 eyes, and in 28 days in 1 eye. Inverse Bell's phenomenon was noted along with lid oedema and ecchymosis in 2 patients. Large resections (23-26 mm) were associated with poor Bell's phenomenon on the first postoperative day (p=0.027, Fisher's exact test). However, the duration required for recovery of Bell's phenomenon did not show any significant difference with the amount of resection (p=0.248, Mann-Whitney test). Larger resections resulted in greater lagophthalmos (correlation=0.830, p<0.0001). Patients whose recovery of Bell's phenomenon was delayed for more than 7 days had a greater number of complications (p=0.001, Fisher's exact test). Conclusion: Close monitoring for Bell's recovery is required following levator resection. PMID:28584563
Investigation of hollow cathode performance for 30-cm thrusters
NASA Technical Reports Server (NTRS)
Mirtich, M. J.
1973-01-01
A parametric investigation of 6.35 mm diameter mercury hollow cathodes was carried out in a bell jar. The parameters that were varied were the amount of initial emissive mix, insert position, emission current, cathode temperature, orifice diameter, and mercury flow rate. Flow characteristic curves and performance as a function of time were obtained for the various cathodes. The results of a 3880 hr life test of a main cathode run at 15 amps emission current with no noticeable changes in keeper and collector voltages are also presented.
Nonlocality in many-body quantum systems detected with two-body correlators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tura, J., E-mail: jordi.tura@icfo.es; Augusiak, R.; Sainz, A.B.
Contemporary understanding of correlations in quantum many-body systems and in quantum phase transitions is based to a large extent on the recent intensive studies of entanglement in many-body systems. In contrast, much less is known about the role of quantum nonlocality in these systems, mostly because the available multipartite Bell inequalities involve high-order correlations among many particles, which are hard to access theoretically, and even harder experimentally. Standard, “theorist- and experimentalist-friendly” many-body observables involve correlations among only few (one, two, rarely three...) particles. Typically, there is no multipartite Bell inequality for this scenario based on such low-order correlations. Recently, however, we have succeeded in constructing multipartite Bell inequalities that involve two- and one-body correlations only, and showed how they revealed the nonlocality in many-body systems relevant for nuclear and atomic physics [Tura et al., Science 344 (2014) 1256]. With the present contribution we continue our work on this problem. On the one hand, we present a detailed derivation of the above Bell inequalities, pertaining to permutation symmetry among the involved parties. On the other hand, we present a couple of new results concerning such Bell inequalities. First, we characterize their tightness. We then discuss maximal quantum violations of these inequalities in the general case, and their scaling with the number of parties. Moreover, we provide new classes of two-body Bell inequalities which reveal nonlocality of the Dicke states—ground states of physically relevant and experimentally realizable Hamiltonians. Finally, we shortly discuss various scenarios for nonlocality detection in mesoscopic systems of trapped ions or atoms, and by atoms trapped in the vicinity of designed nanostructures.
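As a minimal numerical illustration of a Bell test built from two-body correlators, the sketch below evaluates the standard CHSH combination for the two-qubit singlet state with plain numpy; it uses the textbook bipartite settings, not the permutationally invariant many-body inequalities derived in the paper:

    import numpy as np

    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)

    def obs(theta):
        # Spin measurement along angle theta in the x-z plane
        return np.cos(theta) * Z + np.sin(theta) * X

    singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

    def E(a, b):
        # Two-body correlator <A(a) x B(b)> in the singlet state
        return np.real(singlet.conj() @ np.kron(obs(a), obs(b)) @ singlet)

    a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
    print(abs(S))   # ~ 2*sqrt(2) ~ 2.828 > 2, the local-realistic bound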
TRMM rainfall estimative coupled with Bell (1969) methodology for extreme rainfall characterization
NASA Astrophysics Data System (ADS)
Schiavo Bernardi, E.; Allasia, D.; Basso, R.; Freitas Ferreira, P.; Tassi, R.
2015-06-01
The lack of rainfall data in Brazil, and in particular in Rio Grande do Sul State (RS), hinders the understanding of the spatial and temporal distribution of rainfall, especially in the case of the more complex extreme events. In this context, rainfall estimation from remote sensors is seen as an alternative to the scarcity of rainfall gauges. However, as these are indirect measures, such estimates need validation. This paper aims to verify the applicability of Tropical Rainfall Measuring Mission (TRMM) satellite information for extreme rainfall determination in RS. The analysis was carried out at temporal scales ranging from 5 min to daily rainfall, while the spatial distribution of rainfall was investigated by means of regionalization. An initial test verified TRMM rainfall estimates against rainfall measured at gauges for the 1998-2013 period, considering different durations and return periods (RP). Results indicated that, for RPs of 2, 5, 10 and 15 years, TRMM overestimated daily rainfall by 24.7% on average. As the minimum TRMM time step is 3 h, in order to verify shorter-duration rainfall, the TRMM data were adapted to fit Bell's (1969) generalized IDF formula (based on the similarity between the mechanisms of extreme rainfall events, as they are associated with convective cells). The error of Bell's equation against measured precipitation was around 5-10%, varying with location, RP and duration, while the error of the coupled Bell+TRMM estimate was around 10-35%. However, the errors were regionally distributed, allowing a correction to be implemented that reduced these values by half. These findings in turn permitted the use of TRMM+Bell estimates to improve the understanding of the spatiotemporal distribution of extreme hydrological rainfall events.
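For reference, Bell's (1969) generalized intensity-duration-frequency relation, as commonly quoted in the hydrology literature, scales the depth for duration t (minutes) and return period T (years) from the 10-year, 60-minute depth; a small sketch under that assumption:

    import math

    def bell_1969_depth(t_min, T_yr, r60_10):
        """Rainfall depth for duration t_min (5-120 min) and return period
        T_yr (2-100 yr), scaled from the 10-yr, 60-min depth r60_10."""
        return (0.21 * math.log(T_yr) + 0.52) * (0.54 * t_min**0.25 - 0.50) * r60_10

    # e.g. the 15-min, 10-yr depth implied by a 10-yr, 60-min depth of 50 mm
    print(bell_1969_depth(15, 10, 50.0))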
Brunori, Paola; Masi, Piergiorgio; Faggiani, Luigi; Villani, Luciano; Tronchin, Michele; Galli, Claudio; Laube, Clarissa; Leoni, Antonella; Demi, Maila; La Gioia, Antonio
2011-04-11
Neonatal jaundice can lead to severe clinical consequences. Measurement of bilirubin in samples is subject to interference from hemolysis. Above a method-dependent cut-off value of measured hemolysis, the bilirubin value is not accepted and a new sample is required for evaluation, although this is not always possible, especially with newborns and cachectic oncological patients. When the use of different methods, less prone to interference, is not feasible, an alternative method for recovering the analytical significance of rejected data might help clinicians to take appropriate decisions. We studied the effects of hemolysis on total bilirubin measurement, comparing the hemolysis-interfered bilirubin measurement with the non-interfered value. Interference curves were extrapolated over a wide range of bilirubin (0-30 mg/dL) and hemolysis (H index 0-1100). Interference "altitude" curves were calculated and plotted, and a bimodal acceptance table was calculated. The non-interfered bilirubin of a given sample was calculated by linear interpolation between the nearest lower and upper interference curves. Rejection of interference-sensitive data from hemolysed samples should be based not upon the interferent concentration alone but upon a more complex algorithm based on the concentration-dependent bimodal interaction between the interfered analyte and the measured interferent. The altitude-curve cartography approach to interfered assays may help laboratories to build up their own method-dependent algorithm and to improve the trueness of their data by choosing a cut-off value different from the one (-10% interference) proposed by manufacturers. When re-sampling or an alternative method is not available, the altitude-curve cartography approach might also represent an alternative way to recover the analytical significance of rejected data. Copyright © 2011 Elsevier B.V. All rights reserved.
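A minimal sketch of the described recovery step, i.e. linear interpolation between the nearest lower and upper interference curves; the calibration numbers below are invented for illustration, not taken from the paper:

    import numpy as np

    # Hypothetical calibration: the value actually measured at each hemolysis
    # (H) index for samples of known, non-interfered bilirubin level.
    h_grid = np.array([0, 200, 400, 600, 800, 1000])       # H index
    true_levels = np.array([5.0, 10.0, 20.0])              # mg/dL
    measured = np.array([                                  # rows follow true_levels
        [5.0, 4.6, 4.1, 3.7, 3.2, 2.8],
        [10.0, 9.4, 8.7, 8.0, 7.2, 6.5],
        [20.0, 19.1, 18.0, 16.9, 15.7, 14.6],
    ])

    def recover(reading, h_index):
        # Value each calibration curve would display at this H index
        at_h = np.array([np.interp(h_index, h_grid, row) for row in measured])
        # Interpolate the non-interfered level between the bracketing curves
        return float(np.interp(reading, at_h, true_levels))

    print(recover(8.4, 500))   # ~ 10 mg/dL despite the hemolysed reading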
Information Systems Security Products and Services Catalogue.
1992-01-01
pricing information on the Motorola Portable DES Receiver Station and Portable DES Base Station, contact Motorola. The PX-300-S ranges in cost from... C2 Paul Smith (612) 482-2776, Tom Latterner (301) 220-3400, Jeffrey S. Bell (215) 986-6864, John Haggard (312) 714-7604 4-2d.2 GENERAL-PURPOSE... primary software security mechanism of the SCOMP system is the security kernel, based on the Center-approved Bell-LaPadula model of the software portion
Error Model and Compensation of Bell-Shaped Vibratory Gyro
Su, Zhong; Liu, Ning; Li, Qing
2015-01-01
A bell-shaped vibratory angular velocity gyro (BVG), inspired by the traditional Chinese bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation scheme for the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator characteristics, to analyze the influence of the main error sources, and to set up an error model for the BVG. The error sources are classified according to their error propagation characteristics, and a compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation, and noise filtering. With compensation, the bias instability improves from 20.5°/h to 4.7°/h, the angle random walk from 2.8°/√h to 0.7°/√h, and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, a good linear relationship is shown between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for low and medium rotational speed measurement. PMID:26393593
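As an illustration of the rough (bias and scale factor) stage of such a calibration, the sketch below fits a linear error model to rate-table data and inverts it; the numbers are invented, and the paper's full model also covers temperature and noise terms:

    import numpy as np

    ref = np.array([-100, -50, -10, 0, 10, 50, 100], float)     # table rate, deg/s
    raw = np.array([-96.1, -47.9, -9.4, 0.6, 10.2, 48.7, 96.9])  # gyro output

    # Least-squares fit of raw = k * ref + b (scale factor k, bias b)
    A = np.vstack([ref, np.ones_like(ref)]).T
    k, b = np.linalg.lstsq(A, raw, rcond=None)[0]

    def compensate(raw_rate):
        # Invert the fitted bias/scale-factor model
        return (raw_rate - b) / k

    print(k, b, np.max(np.abs(compensate(raw) - ref)))   # residual after compensation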
The Season of Dorland-Bell: History of an Appalachian Mission School. Revised Second Edition.
ERIC Educational Resources Information Center
Painter, Jacqueline Burgin
This book details the history of the Dorland-Bell School, a residential school in rural western North Carolina. The book is based on letters, extensive interviews, and research about the school. In 1886, Luke and Juliette Dorland, Presbyterian missionaries and educators, retired to Hot Springs, North Carolina. However, at the request of residents…
Quantum Nonlocality and Reality
NASA Astrophysics Data System (ADS)
Bell, Mary; Gao, Shan
2016-09-01
Preface; Part I. John Stewart Bell: The Physicist: 1. John Bell: the Irish connection Andrew Whitaker; 2. Recollections of John Bell Michael Nauenberg; 3. John Bell: recollections of a great scientist and a great man Gian-Carlo Ghirardi; Part II. Bell's Theorem: 4. What did Bell really prove? Jean Bricmont; 5. The assumptions of Bell's proof Roderich Tumulka; 6. Bell on Bell's theorem: the changing face of nonlocality Harvey R. Brown and Christopher G. Timpson; 7. Experimental tests of Bell inequalities Marco Genovese; 8. Bell's theorem without inequalities: on the inception and scope of the GHZ theorem Olival Freire, Jr and Osvaldo Pessoa, Jr; 9. Strengthening Bell's theorem: removing the hidden-variable assumption Henry P. Stapp; Part III. Nonlocality: Illusions or Reality?: 10. Is any theory compatible with the quantum predictions necessarily nonlocal? Bernard d'Espagnat; 11. Local causality, probability and explanation Richard A. Healey; 12. Bell inequality and many-worlds interpretation Lev Vaidman; 13. Quantum solipsism and non-locality Travis Norsen; 14. Lessons of Bell's theorem: nonlocality, yes; action at a distance, not necessarily Wayne C. Myrvold; 15. Bell non-locality, Hardy's paradox and hyperplane dependence Gordon N. Fleming; 16. Some thoughts on quantum nonlocality and its apparent incompatibility with relativity Shan Gao; 17. A reasonable thing that just might work Daniel Rohrlich; 18. Weak values and quantum nonlocality Yakir Aharonov and Eliahu Cohen; Part IV. Nonlocal Realistic Theories: 19. Local beables and the foundations of physics Tim Maudlin; 20. John Bell's varying interpretations of quantum mechanics: memories and comments H. Dieter Zeh; 21. Some personal reflections on quantum non-locality and the contributions of John Bell Basil J. Hiley; 22. Bell on Bohm Sheldon Goldstein; 23. Interactions and inequality Philip Pearle; 24. Gravitation and the noise needed in objective reduction models Stephen L. Adler; 25. Towards an objective physics of Bell non-locality: palatial twistor theory Roger Penrose; 26. Measurement and macroscopicity: overcoming conceptual imprecision in quantum measurement theory Gregg Jaeger; Index.
Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-08
Small fields (smaller than 4 × 4 cm2) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media often involves larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-XiO, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse, tested against measurement. Measurements were made using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons at square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity was introduced individually at two different depths relative to the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup was measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the deviation of the whole CADD curve from the measured curve, was calculated. For air and lung heterogeneities, at both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, gradually decreasing as field size increases, except for AAA. For aluminum and bone, all algorithms deviate less at 15 MV, irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed maximum deviation, except in the 6 MV bone setup. For all algorithms, irrespective of energy and field size, the dose deviation is higher when a heterogeneity lies nearer to Dmax than when the same heterogeneity lies farther from it. All algorithms also show maximum deviation in lower-density materials compared to high-density materials.
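The %NRMSD figure of merit can be computed directly from the two depth-dose curves; a short sketch, assuming normalization by the measured maximum (one common convention, since the abstract does not state which is used):

    import numpy as np

    def pct_nrmsd(calc, meas):
        # Percent normalized RMS deviation of a calculated CADD curve
        # against the measured one, sampled at the same depths
        calc, meas = np.asarray(calc, float), np.asarray(meas, float)
        return 100.0 * np.sqrt(np.mean((calc - meas) ** 2)) / meas.max()

    print(pct_nrmsd([100, 80, 60], [100, 82, 59]))   # toy three-point example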
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
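A toy version of such a Bayesian fit, using the emcee ensemble sampler on a one-dimensional arctan rotation-curve model (an assumed stand-in for the full 2D tilted-ring model, which has many more parameters per ring):

    import numpy as np
    import emcee

    def v_model(r, v_flat, r_t):
        # Simple arctan rotation curve
        return v_flat * (2 / np.pi) * np.arctan(r / r_t)

    rng = np.random.default_rng(0)
    r = np.linspace(0.5, 15, 30)                          # radius, kpc
    v_obs = v_model(r, 180.0, 2.0) + rng.normal(0, 5, r.size)

    def log_prob(theta):
        v_flat, r_t = theta
        if not (0 < v_flat < 500 and 0 < r_t < 20):       # broad flat priors
            return -np.inf
        return -0.5 * np.sum((v_obs - v_model(r, v_flat, r_t)) ** 2 / 5.0**2)

    nwalkers, ndim = 32, 2
    p0 = np.array([150.0, 1.0]) + 1e-2 * rng.normal(size=(nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    print(np.median(sampler.get_chain(discard=500, flat=True), axis=0))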
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stobinska, M.; Institute for Theoretical Physics II, Erlangen-Nuernberg University, Erlangen; Sekatski, P.
Quantum correlations may violate the Bell inequalities. Most experimental schemes confirming this prediction have been realized in all-optical Bell tests suffering from the detection loophole. Experiments which simultaneously close this loophole and the locality loophole are highly desirable and remain challenging. An approach to loophole-free Bell tests is based on amplification of the entangled photons (i.e., on macroscopic entanglement), for which an optical signal should be easy to detect. However, the macroscopic states are partially indistinguishable by classical detectors. An interesting idea to overcome these limitations is to replace the postselection by an appropriate preselection immediately after the amplification. This is in the spirit of state preprocessing revealing hidden nonlocality. Here, we examine one of the possible preselections, but the presented tools can be used for analysis of other schemes. Filtering methods making the macroscopic entanglement useful for Bell tests and quantum protocols are currently the subject of intensive study in the field.
Arican, Pinar; Dundar, Nihal Olgac; Gencpinar, Pinar; Cavusoglu, Dilek
2017-01-01
Bell's palsy is the most common cause of acute peripheral facial nerve paralysis, but the optimal dose of corticosteroids in pediatric patients is still unclear. This retrospective study aimed to evaluate the efficacy of low-dose corticosteroid therapy compared with high-dose corticosteroid therapy in children with Bell's palsy. Patients were divided into 2 groups based on the dose of oral prednisolone regimen initiated. The severity of idiopathic facial nerve paralysis was graded according to the House-Brackmann Grading Scale. The patients were re-assessed in terms of recovery rate at the first, third, and sixth months of treatment. There was no significant difference in complete recovery between the 2 groups after 1, 3, and 6 months of treatment. In our study, we concluded that even at a dose of 1 mg/kg/d, oral prednisolone was highly effective in the treatment of Bell's palsy in children.
A Two Dimensional Prediction of Solar Cycle 25
NASA Astrophysics Data System (ADS)
Munoz-Jaramillo, A.; Martens, P. C.
2017-12-01
To date, most solar cycle predictions have focused on forecasting solar cycle amplitude and the cycle's bell-curve shape. However, recent intriguing observational results suggest that all solar cycles follow the same longitudinal path regardless of their amplitude, and have a very similar decay once they reach a sufficient level of maturity. Cast in the light of our current understanding, these results suggest that the toroidal fields inside the Sun are subject to a very high turbulent diffusivity (of the order of magnitude of mixing-length estimates), and that their equatorward propagation is driven by a steady meridional flow. Assuming this is the case, we revisit the relationship between the polar fields at minimum and the amplitude of the next cycle, and deliver a new generation of polar-field-based predictions that include the depth of the minimum, as well as the latitude and time of the first active regions of solar cycle 25.
Visual perception of male body attractiveness.
Fan, J; Dai, W; Liu, F; Wu, J
2005-02-07
Based on 69 scanned Chinese male subjects and 25 Caucasian male subjects, the present study showed that the volume height index (VHI) is the most important visual cue to male body attractiveness for young Chinese viewers among the many body parameters examined in the study. VHI alone can explain ca. 73% of the variance of male body attractiveness ratings. The effect of VHI can be fitted with two half bell-shaped exponential curves, with an optimal VHI at 17.6 l m^-2 and 18.0 l m^-2 for female raters and male raters, respectively. In addition to VHI, other body parameters or ratios can have small, but significant, effects on male body attractiveness. Body proportions associated with fitness enhance male body attractiveness. It was also found that there is an optimal waist-to-hip ratio (WHR) at 0.8, and deviations from this optimal WHR reduce male body attractiveness.
Automatic Mexico Gulf Oil Spill Detection from Radarsat-2 SAR Satellite Data Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Marghany, Maged
2016-10-01
In this work, a genetic algorithm is exploited for the automatic detection of oil spills of small and large size. Detection is performed on arrays of RADARSAT-2 SAR ScanSAR Narrow single-beam data acquired in the Gulf of Mexico. The study shows that the genetic algorithm automatically segmented the dark-spot patches corresponding to small and large oil spill pixels. This conclusion is confirmed by the receiver operating characteristic (ROC) curve and by documented ground data. The ROC curve indicates that the existence of oil slick footprints can be identified with an area under the curve, measured between the ROC curve and the no-discrimination line, of 90%, which is greater than that of the other surrounding environmental features. Small oil spills represented 30% of the oil spill pixels discriminated in the ROC curve. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills of either small or large size, and the ScanSAR Narrow single-beam mode serves as an excellent sensor for oil spill pattern detection and surveying in the Gulf of Mexico.
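The abstract does not give the genetic algorithm's encoding, so the sketch below shows the general idea on a simplified problem: evolving a scalar backscatter threshold that separates dark-spot (oil) pixels from the sea-clutter background, with an Otsu-style between-class variance as fitness:

    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(thresh, img):
        # Between-class variance; pixels below the threshold are oil candidates
        dark, bright = img[img < thresh], img[img >= thresh]
        if dark.size == 0 or bright.size == 0:
            return 0.0
        w0, w1 = dark.size / img.size, bright.size / img.size
        return w0 * w1 * (dark.mean() - bright.mean()) ** 2

    def ga_threshold(img, pop=30, gens=40):
        lo, hi = float(img.min()), float(img.max())
        popu = rng.uniform(lo, hi, pop)
        for _ in range(gens):
            f = np.array([fitness(t, img) for t in popu])
            parents = popu[np.argsort(f)[-pop // 2:]]            # truncation selection
            children = rng.choice(parents, pop - parents.size)   # clone parents...
            children = children + rng.normal(0, 0.02 * (hi - lo), children.size)  # ...and mutate
            popu = np.concatenate([parents, children])
        return popu[np.argmax([fitness(t, img) for t in popu])]

    # Synthetic backscatter: a dark-spot mode over a brighter clutter mode
    img = np.concatenate([rng.normal(40, 8, 5000), rng.normal(120, 15, 20000)])
    print(ga_threshold(img))   # lands near the valley between the two modes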
Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem
NASA Astrophysics Data System (ADS)
Tangpatiphan, Kritsana; Yokoyama, Akihiko
This paper presents an Improved Evolutionary Programming (IEP) approach for solving the Optimal Power Flow (OPF) problem, which is a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real-Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost function: a quadratic cost curve, a piecewise quadratic cost curve, and a quadratic cost curve superimposed with a sine component. These three cost curves represent, respectively, a simplified generator fuel cost model and more accurate models of a combined-cycle generating unit and of a thermal unit with the valve-point loading effect. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are compared. The simulation results indicate that IEP requires less computing time than PEP and finds better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.
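A compact sketch of the core idea, i.e. classical EP (Gaussian mutation plus (mu + lambda) survival) augmented with a blend crossover borrowed from real-coded GAs; a multimodal test function stands in for the OPF cost, since the network model is beyond a short example:

    import numpy as np

    rng = np.random.default_rng(2)

    def rastrigin(x):
        # Multimodal stand-in for the generator fuel-cost objective
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def iep(dim=6, mu=40, gens=300, sigma=0.3):
        pop = rng.uniform(-5.12, 5.12, (mu, dim))
        for _ in range(gens):
            mutants = pop + rng.normal(0, sigma, pop.shape)        # EP mutation
            i, j = rng.integers(0, mu, mu), rng.integers(0, mu, mu)
            alpha = rng.random((mu, 1))
            blends = alpha * pop[i] + (1 - alpha) * pop[j]         # blend crossover
            merged = np.vstack([pop, mutants, blends])
            fit = np.apply_along_axis(rastrigin, 1, merged)
            pop = merged[np.argsort(fit)[:mu]]                     # (mu + lambda) survival
        return pop[0], rastrigin(pop[0])

    best, cost = iep()
    print(cost)   # approaches 0, the global minimum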
Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits.
Gámez Serna, Citlalli; Ruichek, Yassine
2017-06-14
A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits, without considering factors like road geometry. In this paper, we consider road curvature together with speed limits to automatically adjust the vehicle's speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, curve analysis extraction and speed-limit database creation are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed-limit database contains information about the different speed-limit zones for each traveled path. Our DSA senses speed limits and road curves using GPS information and ensures smooth speed transitions between the current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we show that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits, and consequently to increase safety and comfort for the passenger.
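A sketch of the curve-analysis step: estimating local curvature from three consecutive GPS track points (assumed here to be already projected to local metric coordinates) and converting it into a comfort-bounded curve speed:

    import numpy as np

    def menger_curvature(p1, p2, p3):
        # Curvature = 1 / circumradius of the triangle through three points
        a = np.linalg.norm(p2 - p1)
        b = np.linalg.norm(p3 - p2)
        c = np.linalg.norm(p3 - p1)
        cross = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                    - (p2[1] - p1[1]) * (p3[0] - p1[0]))
        return 0.0 if a * b * c == 0 else 2.0 * cross / (a * b * c)

    def curve_speed(kappa, a_lat_max=2.0):
        # Speed (m/s) keeping lateral acceleration v^2 * kappa within bound
        return float("inf") if kappa == 0 else np.sqrt(a_lat_max / kappa)

    pts = np.array([[0, 0], [10, 1], [20, 4]], float)   # metres
    k = menger_curvature(*pts)
    print(k, curve_speed(k))

The ideal speed for a segment would then be the minimum of this curve speed and the zone's legal limit from the speed-limit database.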
Ozcift, Akin; Gulten, Arif
2011-12-01
Improving the accuracy of machine learning algorithms is vital in designing high-performance computer-aided diagnosis (CADx) systems. Research has shown that a base classifier's performance can be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers from 30 machine learning algorithms and evaluate their classification performance on Parkinson's, diabetes and heart disease datasets from the literature. In the experiments, first the feature dimension of the three datasets is reduced using the correlation-based feature selection (CFS) algorithm. Second, the classification performance of the 30 machine learning algorithms is calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performance of the respective classifiers on the same disease data. All experiments are carried out with a leave-one-out validation strategy, and the performance of the 60 algorithms is evaluated using three metrics: classification accuracy (ACC), kappa error (KE) and area under the receiver operating characteristic (ROC) curve (AUC). The base classifiers achieved average accuracies of 72.15%, 77.52% and 84.43% for the diabetes, heart and Parkinson's datasets, respectively, while the RF classifier ensembles produced average accuracies of 74.47%, 80.49% and 87.13% for the respective diseases. RF, a relatively new classifier ensemble algorithm, can thus be used to improve the accuracy of miscellaneous machine learning algorithms in designing advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
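A simplified sketch of the rotation-forest idea with scikit-learn: each tree is trained in a feature space rotated by PCA fitted on a bootstrap sample. (The full algorithm rotates random feature subsets separately; whole-space rotation is used here for brevity.)

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(3)
    X, y = load_breast_cancer(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    models = []
    for _ in range(10):                             # ensemble of 10 rotated trees
        boot = rng.integers(0, len(Xtr), len(Xtr))  # bootstrap sample
        rot = PCA().fit(Xtr[boot])                  # rotation of the feature space
        tree = DecisionTreeClassifier(random_state=0).fit(rot.transform(Xtr), ytr)
        models.append((rot, tree))

    votes = np.mean([t.predict(r.transform(Xte)) for r, t in models], axis=0)
    print("accuracy:", np.mean((votes > 0.5) == yte))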
Belle II silicon vertex detector
NASA Astrophysics Data System (ADS)
Adamczyk, K.; Aihara, H.; Angelini, C.; Aziz, T.; Babu, V.; Bacher, S.; Bahinipati, S.; Barberio, E.; Baroncelli, Ti.; Baroncelli, To.; Basith, A. K.; Batignani, G.; Bauer, A.; Behera, P. K.; Bergauer, T.; Bettarini, S.; Bhuyan, B.; Bilka, T.; Bosi, F.; Bosisio, L.; Bozek, A.; Buchsteiner, F.; Casarosa, G.; Ceccanti, M.; Červenkov, D.; Chendvankar, S. R.; Dash, N.; Divekar, S. T.; Doležal, Z.; Dutta, D.; Enami, K.; Forti, F.; Friedl, M.; Hara, K.; Higuchi, T.; Horiguchi, T.; Irmler, C.; Ishikawa, A.; Jeon, H. B.; Joo, C. W.; Kandra, J.; Kang, K. H.; Kato, E.; Kawasaki, T.; Kodyš, P.; Kohriki, T.; Koike, S.; Kolwalkar, M. M.; Kvasnička, P.; Lanceri, L.; Lettenbicher, J.; Maki, M.; Mammini, P.; Mayekar, S. N.; Mohanty, G. B.; Mohanty, S.; Morii, T.; Nakamura, K. R.; Natkaniec, Z.; Negishi, K.; Nisar, N. K.; Onuki, Y.; Ostrowicz, W.; Paladino, A.; Paoloni, E.; Park, H.; Pilo, F.; Profeti, A.; Rashevskaya, I.; Rao, K. K.; Rizzo, G.; Rozanska, M.; Sandilya, S.; Sasaki, J.; Sato, N.; Schultschik, S.; Schwanda, C.; Seino, Y.; Shimizu, N.; Stypula, J.; Suzuki, J.; Tanaka, S.; Tanida, K.; Taylor, G. N.; Thalmeier, R.; Thomas, R.; Tsuboyama, T.; Uozumi, S.; Urquijo, P.; Vitale, L.; Volpi, M.; Watanuki, S.; Watson, I. J.; Webb, J.; Wiechczynski, J.; Williams, S.; Würkner, B.; Yamamoto, H.; Yin, H.; Yoshinobu, T.; Belle II SVD Collaboration
2016-09-01
The Belle II experiment at the SuperKEKB collider in Japan is designed to indirectly probe new physics using approximately 50 times the data recorded by its predecessor. An accurate determination of the decay-point position of subatomic particles such as beauty and charm hadrons as well as a precise measurement of low-momentum charged particles will play a key role in this pursuit. These will be accomplished by an inner tracking device comprising two layers of pixelated silicon detector and four layers of silicon vertex detector based on double-sided microstrip sensors. We describe herein the design, prototyping and construction efforts of the Belle-II silicon vertex detector.
NASA Astrophysics Data System (ADS)
Diaz, Kristians; Castañeda, Benjamín; Miranda, César; Lavarello, Roberto; Llanos, Alejandro
2010-03-01
We developed a protocol for the acquisition of digital images and an algorithm for color-based automatic segmentation of cutaneous Leishmaniasis lesions. The protocol for image acquisition provides control over the working environment to manipulate brightness and lighting and to avoid undesirable shadows on the injury by using indirect lighting. The protocol was also used to accurately calculate the area of the lesion, expressed in mm2, even on curved surfaces, by combining the information from two consecutive images. Different color spaces were analyzed and compared using ROC curves in order to determine the color layer with the highest contrast between the background and the wound. The proposed algorithm is composed of three stages: (1) location of the wound, determined by thresholding and mathematical morphology applied to the H layer of the HSV color space; (2) determination of the boundaries of the wound by analyzing the color characteristics in the YIQ space, based on masks (for the wound and the background) estimated from the first stage; and (3) refinement of the results of the previous stages using the discrete dynamic contours algorithm. The regions segmented by the algorithm were compared with manual segmentations made by a medical specialist. Broadly speaking, our results support the view that color provides useful information for the segmentation and measurement of cutaneous Leishmaniasis wounds. Results from ten images showed 99% specificity, 89% sensitivity, and 98% accuracy.
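A minimal OpenCV sketch of stage (1), thresholding and morphology on the H layer; the file name is hypothetical, and stages (2) and (3), the YIQ analysis and dynamic contours, are omitted:

    import cv2
    import numpy as np

    img = cv2.imread("lesion.jpg")                        # hypothetical input
    h = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 0]     # H layer of HSV

    # Otsu threshold on H, then morphological opening/closing to clean the mask
    _, mask = cv2.threshold(h, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep the largest connected component as the wound-location candidate
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    best = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    wound = np.where(labels == best, 255, 0).astype(np.uint8)
    cv2.imwrite("wound_mask.png", wound)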
Huo, Ju; Zhang, Guiyang; Yang, Ming
2018-04-20
This paper is concerned with the anisotropic and non-identical gray-level distribution of feature points clinging to a curved surface, for which a high-precision, uncertainty-resistant pose estimation algorithm is proposed. The weighted contribution of uncertainty to the objective function of the feature point measurement error is analyzed. A novel error objective function based on the spatial collinearity error is then constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, an optimized generalized orthogonal iterative (GOI) algorithm is used for the iterative solution, avoiding poor convergence and significantly resisting the uncertainty. The optimized GOI algorithm hence extends the field-of-view applications and improves the accuracy and robustness of the measurement results through redundant information. Finally, simulation and practical experiments show that the maximum re-projection error of the image coordinates of the target is less than 0.110 pixels. Within a 3000 mm × 3000 mm × 4000 mm space, the maximum estimation errors of static and dynamic measurements of rocket nozzle motion are better than 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty attenuation of the proposed approach, which should therefore have potential for engineering applications.
Automated Assessment of Existing Patient's Revised Cardiac Risk Index Using Algorithmic Software.
Hofer, Ira S; Cheng, Drew; Grogan, Tristan; Fujimoto, Yohei; Yamada, Takashige; Beck, Lauren; Cannesson, Maxime; Mahajan, Aman
2018-05-25
Previous work in the field of medical informatics has shown that rules-based algorithms can be created to identify patients with various medical conditions; however, these techniques have not been compared against actual clinician notes, nor has their ability to predict complications been tested. We hypothesize that a rules-based algorithm can successfully identify patients with the diseases in the Revised Cardiac Risk Index (RCRI). Patients undergoing surgery at the University of California, Los Angeles Health System between April 1, 2013 and July 1, 2016, with at least 2 previous office visits, were included. For each disease in the RCRI except renal failure (congestive heart failure, ischemic heart disease, cerebrovascular disease, and diabetes mellitus), diagnosis algorithms were created based on diagnostic and standard clinical treatment criteria. For each disease state, the prevalence of the disease as determined by the algorithm, by International Classification of Diseases (ICD) code, and by the anesthesiologist's preoperative note was determined. Additionally, 400 American Society of Anesthesiologists class III and IV cases were randomly chosen for manual review by an anesthesiologist. The sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve were determined using the manual review as the gold standard. Last, the ability of the RCRI as calculated by each of the methods to predict in-hospital mortality was determined, and the time necessary to run the algorithms was calculated. A total of 64,151 patients met the inclusion criteria. In general, the incidence of definite or likely disease determined by the algorithms was higher than that detected by the anesthesiologist, and in all disease states the prevalence of disease was lowest for the ICD codes, followed by the preoperative note, followed by the algorithms. In the subset of patients whose records were manually reviewed, the algorithms were generally the most sensitive and the ICD codes the most specific. When computing the modified RCRI using each of the methods, the modified RCRI from the algorithms predicted in-hospital mortality with an area under the receiver operating characteristic curve of 0.70 (0.67-0.73), compared to 0.70 (0.67-0.72) for ICD codes and 0.64 (0.61-0.67) for the preoperative note. On average, the algorithms took 12.64 ± 1.20 minutes to run on 1.4 million patients. Rules-based algorithms for the diseases in the RCRI can be created that perform with discriminative ability similar to physician notes and ICD codes, but with significantly greater economies of scale.
Integration of Anatomic and Pathogenetic Bases for Early Lung Cancer Diagnosis
2007-03-01
transform Y(x, y): the coordinate of every pixel x = (x, y) in a uniform area (x, y) ∈ A, where η(xk, yk) is the curve surrounding the area. [Figure captions recovered from damaged text: Figure 1, a fast algorithm for the distance transform; Figure 2, three clustered cells (from left...).] Cited: Y. Zhang, R. Sankar and W. Qian, "Boundary Delineation in Transrectal Ultrasound", Academic Radiology, 12(11): 1112-1123, 2006.
Joint Diagonalization Applied to the Detection and Discrimination of Unexploded Ordnance
2012-08-01
center (Das et al., 1990; Barrow and Nelson, 2001; Bell et al., 2001; Pasion and Oldenburg, 2001; Zhang et al., 2003; Smith and Morrison, 2004; Tarokh et...matrix for the complete transmitter/receiver array by tiling all the Nr × Nt available samples of expression 5: S = Gsc Ul Λ̇l Ul^T (Gpr)^T...L. R., and D. W. Oldenburg, 2001, A discrimination algorithm for UXO using time-domain electromagnetics: Journal of Environmental and Engineering
NASA Astrophysics Data System (ADS)
Cui, Sheng; Qiu, Chen; Ke, Changjian; He, Sheng; Liu, Deming
2015-11-01
This paper presents a method that can monitor chromatic dispersion (CD) and identify the modulation format (MF) of optical signals simultaneously. The method utilizes features of the output curve of a highly sensitive all-optical CD monitor based on four-wave mixing (FWM). From the symmetric center of the curve, CD can be estimated blindly and independently, while from the profile and convergence region of the curve, ten commonly used modulation formats can be recognized with a simple algorithm based on a maximum-correlation classifier. This technique does not require any high-speed optoelectronics and places no limitation on signal rate. Furthermore, it can tolerate large CD distortions and is robust to polarization mode dispersion (PMD) and amplified spontaneous emission (ASE) noise.
Phi-s correlation and dynamic time warping - Two methods for tracking ice floes in SAR images
NASA Technical Reports Server (NTRS)
Mcconnell, Ross; Kober, Wolfgang; Kwok, Ronald; Curlander, John C.; Pang, Shirley S.
1991-01-01
The authors present two algorithms for performing shape matching on ice floe boundaries in SAR (synthetic aperture radar) images. These algorithms quickly produce a set of ice motion and rotation vectors that can be used to guide a pixel value correlator. The algorithms match a shape descriptor known as the Phi-s curve. The first algorithm uses normalized correlation to match the Phi-s curves, while the second uses dynamic programming to compute an elastic match that better accommodates ice floe deformation. Some empirical data on the performance of the algorithms on Seasat SAR images are presented.
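A sketch of the Phi-s descriptor and its normalized-correlation matching (the first algorithm); this simplified version samples each curve once rather than searching over circular shifts for the unknown rotation:

    import numpy as np

    def phi_s(boundary):
        # Cumulative tangent angle phi versus normalized arc length s
        d = np.diff(boundary, axis=0)
        seg = np.hypot(d[:, 0], d[:, 1])
        s = np.cumsum(seg) / seg.sum()
        phi = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
        return s, phi

    def match_score(b1, b2, n=256):
        # Normalized correlation of two Phi-s curves on a common grid
        grid = np.linspace(0, 1, n)
        p1 = np.interp(grid, *phi_s(b1))
        p2 = np.interp(grid, *phi_s(b2))
        p1, p2 = p1 - p1.mean(), p2 - p2.mean()
        return float(p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2)))

    t = np.linspace(0, 2 * np.pi, 100)
    floe = np.c_[np.cos(t) * (1 + 0.2 * np.cos(3 * t)), np.sin(t)]
    print(match_score(floe, floe + 5.0))   # translation-invariant: ~ 1.0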
Bit-Oriented Quantum Public-Key Cryptosystem Based on Bell States
NASA Astrophysics Data System (ADS)
Wu, WanQing; Cai, QingYu; Zhang, HuanGuo; Liang, XiaoYan
2018-02-01
Quantum public key encryption systems provide information confidentiality using quantum mechanics. This paper presents a quantum public key cryptosystem (QPKC) based on Bell states. By Holevo's theorem, the presented scheme ensures the security of the secret key through one-wayness during the QPKC. The QPKC scheme is information-theoretically secure under chosen plaintext attack (CPA). Finally, some important features of the presented QPKC scheme are compared with other QPKC schemes.
Observation, Sherlock Holmes, and Evidence Based Medicine.
Osborn, John
2002-01-01
Sir Arthur Conan Doyle, the creator of the fictional detective Sherlock Holmes, studied medicine at the University of Edinburgh between 1876 and 1881 under Doctor Joseph Bell who emphasised in his teaching the importance of observation, deduction and evidence. Sherlock Holmes was modelled on Joseph Bell. The modern notions of Evidence Based Medicine (EBM) are not new. A very brief indication of some of the history of EBM is presented including a discussion of the important and usually overlooked contribution of statisticians to the Popperian philosophy of EBM.
Turgeon, Ricky D; Wilby, Kyle J; Ensom, Mary H H
2015-06-01
We conducted a systematic review with meta-analysis to evaluate the efficacy of antiviral agents on complete recovery of Bell's palsy. We searched CENTRAL, Embase, MEDLINE, International Pharmaceutical Abstracts, and sources of unpublished literature to November 1, 2014. Primary and secondary outcomes were complete and satisfactory recovery, respectively. To evaluate statistical heterogeneity, we performed subgroup analysis of baseline severity of Bell's palsy and between-study sensitivity analyses based on risk of allocation and detection bias. The 10 included randomized controlled trials (2419 patients; 807 with severe Bell's palsy at onset) had variable risk of bias, with 9 trials having a high risk of bias in at least 1 domain. Complete recovery was not statistically significantly greater with antiviral use versus no antiviral use in the random-effects meta-analysis of 6 trials (relative risk, 1.06; 95% confidence interval, 0.97-1.16; I(2) = 65%). Conversely, random-effects meta-analysis of 9 trials showed a statistically significant difference in satisfactory recovery (relative risk, 1.10; 95% confidence interval, 1.02-1.18; I(2) = 63%). Response to antiviral agents did not differ visually or statistically between patients with severe symptoms at baseline and those with milder disease (test for interaction, P = .11). Sensitivity analyses did not show a clear effect of bias on outcomes. Antiviral agents are not efficacious in increasing the proportion of patients with Bell's palsy who achieved complete recovery, regardless of baseline symptom severity. Copyright © 2015 Elsevier Inc. All rights reserved.
Bell's theorem and the problem of decidability between the views of Einstein and Bohr.
Hess, K; Philipp, W
2001-12-04
Einstein, Podolsky, and Rosen (EPR) have designed a gedanken experiment that suggested a theory that was more complete than quantum mechanics. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete nor on the existence of hidden parameters. However, the well known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that the mathematical model of Bell excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.
Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness
NASA Astrophysics Data System (ADS)
Hardy, Tyler J.; Cain, Stephen C.
2016-05-01
The goal of this research effort is to improve the Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has previously been developed to mitigate the effects of spatial aliasing by testing potential Resident Space Objects (RSOs) against several sub-pixel-shifted Point Spread Functions (PSFs). An MHT has been shown to increase detection performance at the same false alarm rate. In this paper, the assumption of a priori probability used in an MHT algorithm is investigated. First, an analysis of the pixel decision space is completed to determine the alternative hypothesis prior probabilities. These probabilities are then implemented in an MHT algorithm, which is tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection (Pd) analysis.
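A toy version of the MHT detection statistic: score each candidate pixel's neighborhood against a bank of sub-pixel-shifted PSF hypotheses and keep the best matched-filter response. The Gaussian PSF, unit-variance noise, and the shift grid are all assumptions of this sketch:

    import numpy as np

    def gaussian_psf(size, sigma, dx, dy):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        g = np.exp(-((xx - dx) ** 2 + (yy - dy) ** 2) / (2 * sigma**2))
        return g / g.sum()

    def mht_score(patch, sigma_n=1.0, shifts=(-0.25, 0.0, 0.25)):
        # Best matched-filter statistic over the shifted-PSF hypothesis set
        stats = []
        for dx in shifts:
            for dy in shifts:
                h = gaussian_psf(patch.shape[0], 1.2, dx, dy)
                stats.append((patch * h).sum() / (sigma_n * np.linalg.norm(h)))
        return max(stats)

    rng = np.random.default_rng(4)
    patch = rng.normal(0, 1, (7, 7)) + 5 * gaussian_psf(7, 1.2, 0.25, -0.25)
    print(mht_score(patch))   # thresholded to meet a chosen false-alarm rate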
Education as a Practice of Freedom: Reflections on bell hooks
ERIC Educational Resources Information Center
Specia, Akello; Osman, Ahmed A.
2015-01-01
This paper critically analyses the conceptions of bell hooks on education. It focuses on the relevance of hooks' ideas to the classroom. It is a theoretical paper, based on secondary data, that seeks to contribute to the growing body of knowledge in education. The paper is a reflection on hooks' treatment of education as a practice of freedom, the…
Digital signal processing at Bell Labs-Foundations for speech and acoustics research
NASA Astrophysics Data System (ADS)
Rabiner, Lawrence R.
2004-05-01
Digital signal processing (DSP) is a fundamental tool for much of the research that has been carried out at Bell Labs in the areas of speech and acoustics research. The fundamental bases for DSP include the sampling theorem of Nyquist, the method for digitization of analog signals by Shannon et al., methods of spectral analysis by Tukey, the cepstrum by Bogert et al., and the FFT by Tukey (and Cooley of IBM). Essentially all of these early foundations of DSP came out of the Bell Labs Research Lab in the 1930s, 1940s, 1950s, and 1960s. This fundamental research was motivated by fundamental applications (mainly in the areas of speech, sonar, and acoustics) that led to novel design methods for digital filters (Kaiser, Golden, Rabiner, Schafer), spectrum analysis methods (Rabiner, Schafer, Allen, Crochiere), fast convolution methods based on the FFT (Helms, Bergland), and advanced digital systems used to implement telephony channel banks (Jackson, McDonald, Freeny, Tewksbury). This talk summarizes the key contributions to DSP made at Bell Labs and illustrates how DSP was utilized in the areas of speech and acoustics research. It also shows the vast, worldwide impact of this DSP research on modern consumer electronics.
Bell's palsy before Bell: Cornelis Stalpart van der Wiel's observation of Bell's palsy in 1683.
van de Graaf, Robert C; Nicolai, Jean-Philippe A
2005-11-01
Bell's palsy is named after Sir Charles Bell (1774-1842), who has long been considered to be the first to describe idiopathic facial paralysis in the early 19th century. However, it was discovered that Nicolaus Anton Friedreich (1761-1836) and James Douglas (1675-1742) preceded him in the 18th century. Recently, an even earlier account of Bell's palsy was found, as observed by Cornelis Stalpart van der Wiel (1620-1702) from The Hague, The Netherlands in 1683. Because our current knowledge of the history of Bell's palsy before Bell is limited to a few documents, it is interesting to discuss Stalpart van der Wiel's description and determine its additional value for the history of Bell's palsy. It is concluded that Cornelis Stalpart van der Wiel was the first to record Bell's palsy in 1683. His manuscript provides clues for future historical research.
NASA Technical Reports Server (NTRS)
Michal, David H.
1950-01-01
An investigation of the static and dynamic longitudinal stability characteristics of a 1/3.7-scale rocket-powered model of the Bell MX-776A has been made for a Mach number range from 0.8 to 1.6. Two models were tested with all control surfaces at 0 degree deflection and centers of gravity located 1/4 and 1/2 body diameters, respectively, ahead of the equivalent design location. Both models were stable about the trim conditions but did not trim at 0 degree angle of attack because of slight constructional asymmetries. The results indicated that the variation of lift and pitching moment with angle of attack was not linear. Both the lift-curve slope and the pitching-moment-curve slope were smallest in magnitude near 0 degree angle of attack. In general, an increase in angle of attack was accompanied by a rearward movement of the aerodynamic center as the rear wing moved out of the downwash from the forward surfaces. This characteristic was more pronounced in the transonic region. The dynamic stability, in the form of the total damping factor, varied with normal-force coefficient but was greatest for both models at a Mach number of approximately 1.25. The damping factor was greater at the lower trim normal-force coefficients except at a Mach number of 1.0, at which speed the damping factor was of about the same magnitude for both models. The drag coefficient increased with trim normal-force coefficient and was largest in the transonic region.
An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves
Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing
2014-01-01
Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm uses the parameters of Julia sets to generate a random sequence as the initial keys, and obtains the final encryption keys by scrambling the initial keys along a Hilbert curve. The final cipher image is obtained by modulo arithmetic and a diffusion operation. The method needs only a few parameters for key generation, which greatly reduces the required storage space. Moreover, because of the properties of Julia sets, such as infiniteness and chaotic characteristics, the keys are highly sensitive even to tiny perturbations. The experimental results indicate that the algorithm has a large key space, good statistical properties, high key sensitivity, and effective resistance to chosen-plaintext attack. PMID:24404181
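A sketch of the Hilbert-curve scrambling step, using the standard index-to-coordinate conversion for a curve filling an n×n grid (n a power of two); the Julia-set key generation and diffusion stages are not reproduced here:

    def d2xy(n, d):
        # Map index d along the Hilbert curve to (x, y) on an n x n grid
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                     # rotate the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x, y = x + s * rx, y + s * ry
            t //= 4
            s *= 2
        return x, y

    def scramble(keys, n):
        # Re-order an n*n key sequence by walking it along the Hilbert curve
        return [keys[x * n + y] for x, y in (d2xy(n, d) for d in range(n * n))]

    print(scramble(list(range(16)), 4))   # a permutation of 0..15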
NASA Astrophysics Data System (ADS)
Huang, Huan; Baddour, Natalie; Liang, Ming
2018-02-01
Under normal operating conditions, bearings often run at time-varying rotational speed. Under such circumstances the bearing vibration signal is non-stationary, which renders ineffective the techniques used for bearing fault diagnosis under constant running conditions. One conventional method of bearing fault diagnosis under time-varying speed conditions is to resample the non-stationary signal to a stationary signal via order tracking with the measured, varying speed; with the resampled signal, the methods available for the constant-condition case become applicable. However, the accuracy of order tracking is often inadequate, and the time-varying speed is sometimes not measurable. Thus, resampling-free methods are of interest for bearing fault diagnosis under time-varying rotational speed without tachometers. With the development of time-frequency analysis, the time-varying fault character manifests as curves in the time-frequency domain. By extracting the Instantaneous Fault Characteristic Frequency (IFCF) from the Time-Frequency Representation (TFR) and converting the IFCF, its harmonics, and the Instantaneous Shaft Rotational Frequency (ISRF) into straight lines, a bearing fault can be detected and diagnosed without resampling. However, extraction of the IFCF for bearing fault diagnosis has so far mostly been based on the assumption that the IFCF has the highest amplitude in the TFR at each moment, which is not always true. Hence, a more reliable T-F curve extraction approach should be investigated. Moreover, if the T-F curves, including the IFCF, its harmonics, and the ISRF, can all be extracted from the TFR directly, no extra processing is needed for fault diagnosis. Therefore, this paper proposes an algorithm for multiple T-F curve extraction from the TFR based on fast path optimization, which is more reliable for T-F curve extraction. A new procedure for bearing fault diagnosis under unknown time-varying speed conditions is then developed based on the proposed algorithm and a new fault diagnosis strategy. The average curve-to-curve ratios are used to describe the relationship of the extracted curves, and fault diagnosis is achieved by comparing these ratios to the fault characteristic coefficients. The effectiveness of the proposed method is validated on simulated and experimental signals.
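A compact dynamic-programming sketch of ridge (T-F curve) extraction, one path at a time: at each time step the path maximizes TFR amplitude minus a frequency-jump penalty. This is a generic stand-in for the paper's fast path optimization, whose exact cost function is not given in the abstract:

    import numpy as np

    def extract_ridge(tfr, penalty=0.5):
        # tfr: (n_freq, n_time) magnitude of a time-frequency representation
        n_f, n_t = tfr.shape
        cost = np.empty((n_f, n_t))
        back = np.zeros((n_f, n_t), int)
        cost[:, 0] = tfr[:, 0]
        freqs = np.arange(n_f)
        for t in range(1, n_t):
            for f in range(n_f):
                prev = cost[:, t - 1] - penalty * np.abs(freqs - f)
                back[f, t] = int(np.argmax(prev))
                cost[f, t] = tfr[f, t] + prev[back[f, t]]
        ridge = np.zeros(n_t, int)                 # backtrack the best path
        ridge[-1] = int(np.argmax(cost[:, -1]))
        for t in range(n_t - 1, 0, -1):
            ridge[t - 1] = back[ridge[t], t]
        return ridge

    rng = np.random.default_rng(7)
    tfr = rng.random((40, 100))
    tfr[(10 + 0.1 * np.arange(100)).astype(int), np.arange(100)] += 3.0  # hidden ridge
    print(extract_ridge(tfr)[:10])

Extracting the IFCF, its harmonics, and the ISRF this way, then forming their average curve-to-curve ratios, reproduces the diagnosis step described above.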
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancements in in vivo imaging allow segmentation of trabecular bone (TB) microstructure, a known key determinant of bone strength and fracture risk. Accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of the trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. The method also generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied to the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) was observed between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold model over a cubical volume of interest (VOI), and its correlation with the YM computed using conventional micro-CT-based finite-element analysis over the same VOI was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB and demonstrate the feasibility of direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
[Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].
Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao
2014-05-01
Hyperspectral data are characterized by the combination of image and spectrum; given the large data volume, dimension reduction is the main research direction, with band selection and feature extraction the primary methods used for this objective. In the present article, the authors tested methods for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that the fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection, and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.
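The corrected carpet (blanket) method itself is not specified in the abstract, so the sketch below estimates a per-pixel spectral-curve fractal dimension with plain box counting, which conveys the same self-similarity idea:

    import numpy as np

    def box_count_dimension(curve, eps_list=(1, 2, 4, 8, 16)):
        v = np.asarray(curve, float)
        v = (v - v.min()) / (np.ptp(v) + 1e-12) * (v.size - 1)   # square it up
        counts = []
        for eps in eps_list:
            cols = np.arange(v.size) // eps
            boxes = 0
            for c in np.unique(cols):
                seg = v[cols == c]
                boxes += int(seg.max() // eps - seg.min() // eps) + 1
            counts.append(boxes)
        # Dimension = slope of log N(eps) against log(1/eps)
        slope, _ = np.polyfit(np.log(1.0 / np.array(eps_list)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(6)
    spectrum = np.cumsum(rng.normal(size=256))     # rough, curve-like signal
    print(box_count_dimension(spectrum))           # ~ 1.5 for a random walk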
Freiman, Moti; Nickisch, Hannes; Prevrhal, Sven; Schmitt, Holger; Vembar, Mani; Maurovich-Horvat, Pál; Donnelly, Patrick; Goshen, Liran
2017-03-01
The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge with automatically generated centerlines and 3 reference segmentations of 78 coronary segments, and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets with an automated software program, followed by manual correction where required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as a part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy, using the MICCAI 2012 challenge framework, and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the area under the receiver operating characteristic (ROC) curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation for lesions that were diagnosed as obstructive based on CCTA, which could have indicated a need for an invasive exam and revascularization. Our segmentation algorithm improves the maximal surface distance error by ~39% compared to a previously published method on the 18 datasets from the MICCAI 2012 challenge, with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable. In contrast, integrating PVE analysis into the automatic coronary lumen segmentation algorithm improved the flow simulation specificity from 0.6 to 0.68 at the same sensitivity of 0.83. Accounting for PVE also improved the area under the ROC curve for detecting hemodynamically significant CAD from 0.76 to 0.8, compared to automatic segmentation without PVE analysis, with an invasive FFR threshold of 0.8 as the reference standard. Accounting for PVE in flow simulation to support the detection of hemodynamically significant disease in CCTA-based obstructive lesions improved specificity from 0.51 to 0.73 at the same sensitivity of 0.83, and the area under the curve from 0.69 to 0.79. The improvement in the AUC was statistically significant (N = 76, DeLong's test, P = 0.012). Accounting for partial volume effects in automatic coronary lumen segmentation algorithms has the potential to improve the accuracy of CCTA-based hemodynamic assessment of coronary artery lesions. © 2017 American Association of Physicists in Medicine.
Alexander Graham Bell: Teacher of the Deaf.
ERIC Educational Resources Information Center
Bruce, Robert V.
The lecture on Alexander Graham Bell by Dr. Robert V. Bruce, the author of a biography of Bell, focuses on Bell's association with the Clarke School for the Deaf in Massachusetts. Noted are Bell's employment by the school at 25 years of age and the preceding period during which Bell taught elocution at a boys' school in Scotland and used his…
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation, with determination coefficients larger than 0.95 and 0.99, respectively.
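The paper fits the LDRW model by multiple linear regression; as a rough stand-in, the sketch below fits a closely related first-passage-time (Wald/inverse-Gaussian) dilution form by nonlinear least squares. The model form, parameter names, and synthetic data are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def dilution_curve(t, A, t0, mu, lam):
    """First-passage-time (Wald / inverse-Gaussian) dilution model:
    amplitude A, appearance time t0, mean transit time mu, shape lam."""
    tt = np.clip(t - t0, 1e-9, None)
    return A * np.sqrt(lam / (2 * np.pi * tt**3)) * np.exp(
        -lam * (tt - mu) ** 2 / (2 * mu**2 * tt))

rng = np.random.default_rng(0)
t = np.linspace(0, 30, 300)                      # seconds after injection
y = dilution_curve(t, 40.0, 2.0, 8.0, 20.0) + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(dilution_curve, t, y, p0=[30.0, 1.0, 5.0, 10.0])
A_fit = popt[0]   # the model is A times a unit-area density, so curve area ~ A
# Stewart-Hamilton principle: flow is proportional to injected dose / curve area.
```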
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2017-10-01
In order to obtain double encryption via elliptic curve cryptography (ECC) and chaotic synchronisation, this study presents a design methodology for neural-network (NN)-based secure communications in multiple time-delay chaotic systems. ECC is an asymmetric encryption scheme whose strength rests on the difficulty of solving the elliptic curve discrete logarithm problem, a much harder problem than factoring integers; because it is much harder, fewer bits suffice to provide the same level of security. To enhance the strength of the cryptosystem, we conduct double encryption that combines chaotic synchronisation with ECC. Using an improved genetic algorithm, a fuzzy controller is synthesised to realise exponential synchronisation and achieve optimal H∞ performance by minimising the disturbance attenuation level. Finally, a numerical example with simulations is given to demonstrate the effectiveness of the proposed approach.
Sun, Wenqing; Zheng, Bin; Qian, Wei
2017-10-01
This study aimed to analyze the ability of extracting automatically generated features using deep structured algorithms in lung nodule CT image diagnosis, and to compare its performance with traditional computer aided diagnosis (CADx) systems using hand-crafted features. All of the 1018 cases were acquired from the Lung Image Database Consortium (LIDC) public lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel ROI based deep structured algorithms were designed and implemented in this study: convolutional neural network (CNN), deep belief network (DBN), and stacked denoising autoencoder (SDAE). For comparison purposes, we also implemented a CADx system using hand-crafted features including density features, texture features and morphological features. The performance of every scheme was evaluated by using a 10-fold cross-validation method and an assessment index of the area under the receiver operating characteristic curve (AUC). The highest observed area under the curve (AUC) was 0.899±0.018, achieved by the CNN, which was significantly higher than the traditional CADx with AUC=0.848±0.026. The results from the DBN were also slightly higher than those of the CADx, while the SDAE's were slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy stroke detectors, in the deep structured schemes. The study results showed that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, the deep learning algorithms can have better performance than the current popular CADx. We believe that deep learning algorithms with a similar data preprocessing procedure can be used in other medical image analysis areas as well. Copyright © 2017. Published by Elsevier Ltd.
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five-parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least-squares pseudolikelihood-type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
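For orientation, a minimal sketch of the 5PL function and one of the transformation choices the paper compares (fitting on the log-FI scale); the parameter names, starting values, and synthetic data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def fpl(x, a, d, c, b, g):
    """Five-parameter logistic (5PL): a = upper asymptote, d = lower
    asymptote, c = midpoint concentration, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

dil = np.logspace(-2, 4, 12)                      # serial dilutions
rng = np.random.default_rng(1)
fi = fpl(dil, 30000, 100, 50.0, 1.2, 0.8) * (1 + 0.05 * rng.standard_normal(dil.size))

# Fitting log-FI against the log of the model is one way to stabilize
# FI-proportional noise, one of the transformation choices compared above.
popt, _ = curve_fit(lambda x, *p: np.log(fpl(x, *p)), dil, np.log(fi),
                    p0=[25000, 200, 30, 1.0, 1.0], maxfev=20000)
```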
Combining VFH with Bezier for motion planning of an autonomous vehicle
NASA Astrophysics Data System (ADS)
Ye, Feng; Yang, Jing; Ma, Chao; Rong, Haijun
2017-08-01
Vector Field Histogram (VFH) is a method for mobile robot obstacle avoidance. However, due to the nonholonomic constraints of the vehicle, the algorithm is seldom applied to autonomous vehicles. Especially when we expect the vehicle to reach the target location in a certain direction, the algorithm is often unsatisfactory. Fortunately, the Bezier Curve is defined by the states of the starting point and the target point, and we can use this feature to make the vehicle arrive in the expected direction. We therefore propose an algorithm that combines the Bezier Curve with the VFH algorithm: it searches for collision-free states with the VFH search method and selects the optimal trajectory point using the Bezier Curve as the reference line. This means that we improve the cost function in the VFH algorithm by comparing the distance between candidate directions and the reference line. Finally, the direction closest to the reference line is selected as the optimal motion direction.
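A minimal sketch of the reference-line construction described above: a cubic Bezier curve whose endpoint tangents match the start and target headings. The control-point placement (an offset along the unit heading vectors) is an assumed, common construction, not necessarily the authors' exact one.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve; p0, p3 are the start and target
    positions, p1, p2 the control points that fix the end tangents."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

start, goal = np.array([0.0, 0.0]), np.array([10.0, 5.0])
h0, h1 = np.array([1.0, 0.0]), np.array([1.0, 0.0])   # unit heading vectors
ref_line = cubic_bezier(start, start + 3 * h0, goal - 3 * h1, goal)
# A VFH-style cost function could then penalize each candidate direction
# by its angular distance from the nearest reference-line direction.
```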
A hyperbolastic type-I diffusion process: Parameter estimation by means of the firefly algorithm.
Barrera, Antonio; Román-Román, Patricia; Torres-Ruiz, Francisco
2018-01-01
A stochastic diffusion process, whose mean function is a hyperbolastic curve of type I, is presented. The main characteristics of the process are studied and the problem of maximum likelihood estimation for the parameters of the process is considered. To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development. Copyright © 2017 Elsevier B.V. All rights reserved.
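The paper bounds the parametric space by a stagewise procedure before applying the metaheuristic; the sketch below shows only the generic firefly algorithm applied to a bounded minimization (e.g. of a negative log-likelihood). All hyper-parameter values are illustrative assumptions.

```python
import numpy as np

def firefly_minimize(f, bounds, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal firefly algorithm: fireflies move toward brighter (lower
    objective) neighbours with distance-attenuated attractiveness plus
    a small random walk."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n, len(lo)))
    light = np.array([f(p) for p in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:          # j is brighter, so attract i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(len(lo)) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    light[i] = f(x[i])
    best = np.argmin(light)
    return x[best], light[best]

# Usage: maximum likelihood by minimizing the negative log-likelihood over
# a bounded (stagewise-narrowed) parameter box, as in the paper.
```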
Self-Avoiding Walks Over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1999-01-01
Space-filling curves are a popular geometric-embedding approach for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
Clinical practice guideline: Bell's palsy.
Baugh, Reginald F; Basura, Gregory J; Ishii, Lisa E; Schwartz, Seth R; Drumheller, Caitlin Murray; Burkholder, Rebecca; Deckard, Nathan A; Dawson, Cindy; Driscoll, Colin; Gillespie, M Boyd; Gurgel, Richard K; Halperin, John; Khalid, Ayesha N; Kumar, Kaparaboyna Ashok; Micco, Alan; Munsell, Debra; Rosenbaum, Steven; Vaughan, William
2013-11-01
Bell's palsy, named after the Scottish anatomist Sir Charles Bell, is the most common acute mono-neuropathy, or disorder affecting a single nerve, and is the most common diagnosis associated with facial nerve weakness/paralysis. Bell's palsy is a rapid unilateral facial nerve paresis (weakness) or paralysis (complete loss of movement) of unknown cause. The condition leads to the partial or complete inability to voluntarily move facial muscles on the affected side of the face. Although typically self-limited, the facial paresis/paralysis that occurs in Bell's palsy may cause significant temporary oral incompetence and an inability to close the eyelid, leading to potential eye injury. Additional long-term poor outcomes do occur and can be devastating to the patient. Treatments are generally designed to improve facial function and facilitate recovery. There are myriad treatment options for Bell's palsy; some controversy exists regarding the effectiveness of several of these options, with consequent variations in care. In addition, numerous available diagnostic tests are used in the evaluation of patients with Bell's palsy, many of which are of questionable benefit. Furthermore, while patients with Bell's palsy enter the health care system with facial paresis/paralysis as a primary complaint, not all patients with facial paresis/paralysis have Bell's palsy. It is a concern that patients with alternative underlying etiologies may be misdiagnosed or experience unnecessary delays in diagnosis. All of these quality concerns provide an important opportunity for improvement in the diagnosis and management of patients with Bell's palsy. The primary purpose of this guideline is to improve the accuracy of diagnosis for Bell's palsy, to improve the quality of care and outcomes for patients with Bell's palsy, and to decrease harmful variations in the evaluation and management of Bell's palsy. This guideline addresses these needs by encouraging accurate and efficient diagnosis and treatment and, when applicable, facilitating patient follow-up to address the management of long-term sequelae or evaluation of new or worsening symptoms not indicative of Bell's palsy. The guideline is intended for all clinicians in any setting who are likely to diagnose and manage patients with Bell's palsy. The target population is inclusive of both adults and children presenting with Bell's palsy. ACTION STATEMENTS: The development group made a strong recommendation that (a) clinicians should assess the patient using history and physical examination to exclude identifiable causes of facial paresis or paralysis in patients presenting with acute-onset unilateral facial paresis or paralysis, (b) clinicians should prescribe oral steroids within 72 hours of symptom onset for Bell's palsy patients 16 years and older, (c) clinicians should not prescribe oral antiviral therapy alone for patients with new-onset Bell's palsy, and (d) clinicians should implement eye protection for Bell's palsy patients with impaired eye closure.
The panel made recommendations that (a) clinicians should not obtain routine laboratory testing in patients with new-onset Bell's palsy, (b) clinicians should not routinely perform diagnostic imaging for patients with new-onset Bell's palsy, (c) clinicians should not perform electrodiagnostic testing in Bell's palsy patients with incomplete facial paralysis, and (d) clinicians should reassess or refer to a facial nerve specialist those Bell's palsy patients with (1) new or worsening neurologic findings at any point, (2) ocular symptoms developing at any point, or (3) incomplete facial recovery 3 months after initial symptom onset. The development group provided the following options: (a) clinicians may offer oral antiviral therapy in addition to oral steroids within 72 hours of symptom onset for patients with Bell's palsy, and (b) clinicians may offer electrodiagnostic testing to Bell's palsy patients with complete facial paralysis. The development group offered the following no recommendations: (a) no recommendation can be made regarding surgical decompression for patients with Bell's palsy, (b) no recommendation can be made regarding the effect of acupuncture in patients with Bell's palsy, and (c) no recommendation can be made regarding the effect of physical therapy in patients with Bell's palsy.
Event-Ready Bell Test Using Entangled Atoms Simultaneously Closing Detection and Locality Loopholes
NASA Astrophysics Data System (ADS)
Rosenfeld, Wenjamin; Burchardt, Daniel; Garthoff, Robert; Redeker, Kai; Ortegel, Norbert; Rau, Markus; Weinfurter, Harald
2017-07-01
An experimental test of Bell's inequality allows ruling out any local-realistic description of nature by measuring correlations between distant systems. While such tests are conceptually simple, there are strict requirements concerning the detection efficiency of the involved measurements, as well as the enforcement of spacelike separation between the measurement events. Only very recently could both loopholes be closed simultaneously. Here we present a statistically significant, event-ready Bell test based on combining heralded entanglement of atoms separated by 398 m with fast and efficient measurements of the atomic spin states closing essential loopholes. We obtain a violation with S = 2.221 ± 0.033 (compared to the maximal value of 2 achievable with models based on local hidden variables) which allows us to refute the hypothesis of local realism with a significance level P < 2.57 × 10^-9.
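For reference, the CHSH statistic S quoted above is built from four correlation coefficients; a minimal sketch of the computation from coincidence counts, where the dictionary layout of the counts is an assumption for illustration:

```python
import numpy as np

def chsh_s(counts):
    """CHSH S from coincidence counts for the four setting pairs.
    counts[(a, b)] is a 2x2 array of ++, +-, -+, -- coincidences for
    setting pair (a, b); E = (N++ - N+- - N-+ + N--) / N_total."""
    def E(n):
        n = np.asarray(n, float)
        return (n[0, 0] - n[0, 1] - n[1, 0] + n[1, 1]) / n.sum()
    return abs(E(counts[(0, 0)]) + E(counts[(0, 1)])
               + E(counts[(1, 0)]) - E(counts[(1, 1)]))

# |S| <= 2 for any local hidden-variable model; quantum mechanics allows
# up to 2*sqrt(2). The atom-atom experiment above reports S = 2.221(33).
```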
Enhancing Sparsity by Reweighted ℓ1 Minimization
2008-07-01
[Figure captions only: the dashed curves show a reweighted ℓ1 algorithm outperforming the traditional unweighted ℓ1 approach (solid curves) as a function of ε, of the number of reweighting iterations, and of the sparsity level k; recovery is declared when ‖x0 − x‖ℓ∞ ≤ 10^-3.]
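The abstract of this entry was lost to figure residue; for context, a minimal sketch of the iteratively reweighted ℓ1 scheme the title names, with weights w_i = 1/(|x_i| + ε) and each weighted ℓ1 step posed as a linear program. The problem sizes and solver choice are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, y, iters=4, eps=0.1):
    """Iteratively reweighted l1: minimize sum(w_i * |x_i|) s.t. A x = y,
    updating w_i = 1 / (|x_i| + eps) after each solve. Each weighted l1
    problem is an LP in variables [x; t] with |x_i| <= t_i."""
    m, n = A.shape
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        c = np.concatenate([np.zeros(n), w])        # minimize w^T t
        A_eq = np.hstack([A, np.zeros((m, n))])     # A x = y
        # x - t <= 0 and -x - t <= 0 encode |x_i| <= t_i
        A_ub = np.block([[np.eye(n), -np.eye(n)],
                         [-np.eye(n), -np.eye(n)]])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=y,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        x = res.x[:n]
        w = 1.0 / (np.abs(x) + eps)
    return x
```

The reweighting step is what drives the improvement reported in the figures: large entries get small weights and are penalized less, pushing the solution toward sparser supports than plain ℓ1.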
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
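The second pipeline stage is generic supervised classification; a minimal sketch of the BDT-plus-AUC evaluation pattern described above, using scikit-learn on synthetic stand-in features (the feature matrix and labels here are illustrative, not DES data):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in features: rows would be SALT2 or wavelet coefficients per
# light curve; labels 1 = type Ia, 0 = non-Ia. Synthetic data only.
rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 20))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier().fit(X_tr, y_tr)        # one BDT variant
auc = roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1])  # the AUC metric used above
```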
Family of nonlocal bound entangled states
NASA Astrophysics Data System (ADS)
Yu, Sixia; Oh, C. H.
2017-03-01
Bound entanglement, being entangled yet not distillable, is essential to our understanding of the relations between nonlocality and entanglement besides its applications in certain quantum information tasks. Recently, bound entangled states that violate a Bell inequality have been constructed for a two-qutrit system, disproving a conjecture by Peres that bound entanglement is local. Here we construct this kind of nonlocal bound entangled state for all finite dimensions larger than two, making possible their experimental demonstration in most general systems. We propose a Bell inequality, based on a Hardy-type argument for nonlocality, and a steering inequality to identify their nonlocality. We also provide a family of entanglement witnesses to detect their entanglement beyond the Bell inequality and the steering inequality.
Maneuver Acoustic Flight Test of the Bell 430 Helicopter Data Report
NASA Technical Reports Server (NTRS)
Watts, Michael E.; Greenwood, Eric; Smith, Charles D.; Snider, Royce; Conner, David A.
2014-01-01
A cooperative flight test by NASA, Bell Helicopter and the U.S. Army to characterize the steady-state acoustics and measure the maneuver noise of a Bell Helicopter 430 aircraft was accomplished. The test occurred during June/July 2011 at Eglin Air Force Base, Florida. This test gathered a total of 410 test points over 10 test days and compiled an extensive database of dynamic maneuver measurements. Three microphone arrays with up to 31 microphones each were used to acquire acoustic data. Aircraft data included Differential Global Positioning System, aircraft state and rotor state information. This paper provides an overview of the test and documents the data acquired.
Algorithm for automatic forced spirometry quality assessment: technological developments.
Melia, Umberto; Burgos, Felip; Vallverdú, Montserrat; Velickovski, Filip; Lluch-Ariet, Magí; Roca, Josep; Caminal, Pere
2014-01-01
We hypothesized that the implementation of automatic real-time assessment of quality of forced spirometry (FS) may significantly enhance the potential for extensive deployment of an FS program in the community. Recent studies have demonstrated that the automatic quality assessment built into commercially available equipment, based on the quality criteria defined by the ATS/ERS (American Thoracic Society/European Respiratory Society), can be markedly improved. To this end, an algorithm for assessing the quality of FS automatically was reported. The current research describes the mathematical developments of the algorithm. An innovative analysis of the shape of the spirometric curve, adding 23 new metrics to the traditional 4 recommended by the ATS/ERS, was performed. The algorithm was created through a two-step iterative process including: (1) an initial version using the standard FS curves recommended by the ATS; and (2) a refined version using curves from patients. In each of these steps the results were assessed against one expert's opinion. Finally, an independent set of FS curves from 291 patients was used for validation purposes. The novel mathematical approach to characterizing the FS curves led to appropriate FS classification with high specificity (95%) and sensitivity (96%). The results constitute the basis for a successful transfer of FS testing to non-specialized professionals in the community.
Panda, Rashmi; Puhan, N B; Panda, Ganapati
2018-02-01
Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.
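The composite curvature-and-Gabor weight function is specific to this Letter; for orientation, a baseline sketch of the standard intensity-weighted random walker in scikit-image (an assumed dependency), seeded with disc and background labels on a synthetic image:

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic fundus-like image: a bright disc on a noisy background.
rng = np.random.default_rng(3)
img = rng.normal(0.3, 0.05, (128, 128))
yy, xx = np.mgrid[:128, :128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2] += 0.4

# Seed labels: 1 = optic disc, 2 = background, 0 = unlabeled.
markers = np.zeros(img.shape, dtype=np.uint8)
markers[64, 64] = 1
markers[5, 5] = 2

# Edge weights default to exp(-beta * |grad I|^2); the Letter replaces
# this with a mean-curvature + Gabor texture-energy composite weight.
labels = random_walker(img, markers, beta=130)
```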
Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.
Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong
2016-04-01
Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system--multi-channel sensation, just noticeable blur, and the contrast sensitivity function to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human vision system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Wong, Carlos K H; Siu, Shing-Chung; Wan, Eric Y F; Jiao, Fang-Fang; Yu, Esther Y T; Fung, Colman S C; Wong, Ka-Wai; Leung, Angela Y M; Lam, Cindy L K
2016-05-01
The aim of the present study was to develop a simple nomogram that can be used to predict the risk of diabetes mellitus (DM) in the asymptomatic non-diabetic subjects based on non-laboratory- and laboratory-based risk algorithms. Anthropometric data, plasma fasting glucose, full lipid profile, exercise habits, and family history of DM were collected from Chinese non-diabetic subjects aged 18-70 years. Logistic regression analysis was performed on a random sample of 2518 subjects to construct non-laboratory- and laboratory-based risk assessment algorithms for detection of undiagnosed DM; both algorithms were validated on data of the remaining sample (n = 839). The Hosmer-Lemeshow test and area under the receiver operating characteristic (ROC) curve (AUC) were used to assess the calibration and discrimination of the DM risk algorithms. Of 3357 subjects recruited, 271 (8.1%) had undiagnosed DM defined by fasting glucose ≥7.0 mmol/L or 2-h post-load plasma glucose ≥11.1 mmol/L after an oral glucose tolerance test. The non-laboratory-based risk algorithm, with scores ranging from 0 to 33, included age, body mass index, family history of DM, regular exercise, and uncontrolled blood pressure; the laboratory-based risk algorithm, with scores ranging from 0 to 37, added triglyceride level to the risk factors. Both algorithms demonstrated acceptable calibration (Hosmer-Lemeshow test: P = 0.229 and P = 0.483) and discrimination (AUC 0.709 and 0.711) for detection of undiagnosed DM. A simple-to-use nomogram for detecting undiagnosed DM has been developed using validated non-laboratory-based and laboratory-based risk algorithms. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
Reisner, Andrew T; Chen, Liangyou; McKenna, Thomas M; Reifman, Jaques
2008-10-01
Prehospital severity scores can be used in routine prehospital care, mass casualty care, and military triage. If computers could reliably calculate clinical scores, new clinical and research methodologies would be possible. One obstacle is that vital signs measured automatically can be unreliable. We hypothesized that Signal Quality Indices (SQI's), computer algorithms that differentiate between reliable and unreliable monitored physiologic data, could improve the predictive power of computer-calculated scores. In a retrospective analysis of trauma casualties transported by air ambulance, we computed the Triage Revised Trauma Score (RTS) from archived travel monitor data. We compared the areas-under-the-curve (AUC's) of receiver operating characteristic curves for prediction of mortality and red blood cell transfusion for 187 subjects with comparable quantities of good-quality and poor-quality data. Vital signs deemed reliable by SQI's led to significantly more discriminatory severity scores than vital signs deemed unreliable. We also compared automatically-computed RTS (using the SQI's) versus RTS computed from vital signs documented by medics. For the subjects in whom the SQI algorithms identified 15 consecutive seconds of reliable vital signs data (n = 350), the automatically-computed scores' AUC's were the same as the medic-based scores' AUC's. Using the Prehospital Index in place of RTS led to very similar results, corroborating our findings. SQI algorithms improve automatically-computed severity scores, and automatically-computed scores using SQI's are equivalent to medic-based scores.
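For reference, the Triage RTS sums coded values (0-4) of the Glasgow Coma Scale, systolic blood pressure, and respiratory rate; a sketch using the conventional published coding thresholds (quoted from memory, so the band edges below should be checked against the source):

```python
def coded(value, bins):
    """Return 4..1 for the band whose lower bound `value` meets,
    else 0; `bins` lists the lower bounds for codes 4, 3, 2, 1."""
    for code, lo in zip((4, 3, 2, 1), bins):
        if value >= lo:
            return code
    return 0

def triage_rts(gcs, sbp, rr):
    gcs_c = coded(gcs, (13, 9, 6, 4))        # GCS 13-15 -> 4, ..., 3 -> 0
    sbp_c = coded(sbp, (90, 76, 50, 1))      # SBP > 89 -> 4, ..., 0 -> 0
    rr_c = 3 if rr > 29 else coded(rr, (10, 10, 6, 1))  # RR 10-29 -> 4, > 29 -> 3
    return gcs_c + sbp_c + rr_c              # 0 (worst) .. 12 (best)

assert triage_rts(15, 120, 16) == 12
```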
... What Is Bell's Palsy? Bell's palsy is a temporary weakness or paralysis of the muscles on one side of the ... of your body. Some other conditions can cause paralysis that's more serious than Bell's palsy. Tell the doctor if you are having ...
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and the justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
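NLINEAR itself is Fortran 77; as a modern illustration of the same idea (statistically weighted nonlinear least squares, minimizing chi-square from user-supplied initial parameter estimates), a sketch in Python with an assumed exponential model:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """User-specified fitting function, as NLINEAR requires."""
    return a * np.exp(-b * x) + c

x = np.linspace(0, 10, 40)
rng = np.random.default_rng(4)
sigma = 0.05 * np.ones_like(x)                  # measurement uncertainties
y = model(x, 2.0, 0.7, 0.3) + sigma * rng.standard_normal(x.size)

# Meaningful initial estimates matter for convergence, as the abstract notes.
p0 = [1.0, 1.0, 0.0]
popt, pcov = curve_fit(model, x, y, p0=p0, sigma=sigma, absolute_sigma=True)
chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)  # goodness of fit
perr = np.sqrt(np.diag(pcov))                        # parameter uncertainties
```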
A modified CoRoT detrend algorithm and the discovery of a new planetary companion
NASA Astrophysics Data System (ADS)
Boufleur, Rodrigo C.; Emilio, Marcelo; Janot-Pacheco, Eduardo; Andrade, Laerte; Ferraz-Mello, Sylvio; do Nascimento, José-Dias, Jr.; de La Reza, Ramiro
2018-01-01
We present MCDA, a modification of the COnvection ROtation and planetary Transits (CoRoT) detrend algorithm (CDA), suitable for detrending chromatic light curves. By means of robust statistics and better handling of short-term variability, the implementation decreases systematic light-curve variations and improves the detection of exoplanets when compared with the original algorithm. All CoRoT chromatic light curves (a total of 65,655) were analysed with our algorithm. Dozens of new transit candidates and all previously known CoRoT exoplanets were rediscovered in those light curves using a box-fitting algorithm. For three of the new cases, spectroscopic measurements of the candidates' host stars were retrieved from the ESO Science Archive Facility and used to calculate stellar parameters and, in the best cases, radial velocities. In addition to our improved detrend technique, we announce the discovery of a planet that orbits a 0.79 (+0.08/−0.09) R⊙ star with a period of 6.71837 ± 0.00001 d and has a radius of 0.57 (+0.06/−0.05) R_J and a mass of 0.15 ± 0.10 M_J. We also present the analysis of two cases in which the parameters found suggest the existence of possible planetary companions.
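A minimal sketch of the box-fitting search step mentioned above, using astropy's BoxLeastSquares (an assumed, standard implementation, not the authors' code) on an already-detrended light curve with a toy injected transit:

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(5)
t = np.linspace(0, 27, 4000)                   # days
y = 1 + 1e-4 * rng.standard_normal(t.size)     # detrended, normalized flux
y[(t % 6.71837) < 0.1] -= 0.004                # toy 6.72-day box transit

model = BoxLeastSquares(t, y)
power = model.autopower(0.1)                   # trial transit duration ~0.1 d
best = np.argmax(power.power)
period = power.period[best]                    # recovered orbital period
```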
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yelton, John
The project involved analysis of data taken with the Belle detector operating at the KEKB accelerator in Japan, as well as commissioning of the Belle II detector, which is destined to replace the Belle detector.
77 FR 23388 - Airworthiness Directives; Bell Helicopter Textron Canada Limited Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-19
... Airworthiness Directives; Bell Helicopter Textron Canada Limited Helicopters AGENCY: Federal Aviation... are publishing a new airworthiness directive (AD) for Bell Helicopter Textron Canada Limited (Bell..., contact Bell Helicopter Textron Canada Limited, 12,800 Rue de l'Avenir, Mirabel, Quebec J7J1R4, telephone...
John S. Bell's concept of local causality
NASA Astrophysics Data System (ADS)
Norsen, Travis
2011-12-01
John Stewart Bell's famous theorem is widely regarded as one of the most important developments in the foundations of physics. Yet even as we approach the 50th anniversary of Bell's discovery, its meaning and implications remain controversial. Many workers assert that Bell's theorem refutes the possibility suggested by Einstein, Podolsky, and Rosen (EPR) of supplementing ordinary quantum theory with "hidden" variables that might restore determinism and/or some notion of an observer-independent reality. But Bell himself interpreted the theorem very differently—as establishing an "essential conflict" between the well-tested empirical predictions of quantum theory and relativistic local causality. Our goal is to make Bell's own views more widely known and to explain Bell's little-known formulation of the concept of relativistic local causality on which his theorem rests. We also show precisely how Bell's formulation of local causality can be used to derive an empirically testable Bell-type inequality and to recapitulate the EPR argument.
Symmetric digit sets for elliptic curve scalar multiplication without precomputation
Heuberger, Clemens; Mazzoli, Michela
2014-01-01
We describe a method to perform scalar multiplication on two classes of ordinary elliptic curves, namely E: y^2 = x^3 + Ax in prime characteristic p ≡ 1 (mod 4), and E: y^2 = x^3 + B in prime characteristic p ≡ 1 (mod 3). On these curves, the 4th and 6th roots of unity act as (computationally efficient) endomorphisms. In order to optimise the scalar multiplication, we consider a width-w NAF (Non-Adjacent Form) digit expansion of positive integers to the complex base τ, where τ is a zero of the characteristic polynomial x^2 − tx + p of the Frobenius endomorphism associated to the curve. We provide a precomputation-less algorithm by means of a convenient factorisation of the unit group of residue classes modulo τ in the endomorphism ring, whereby we construct a digit set consisting of powers of subgroup generators, which are chosen as efficient endomorphisms of the curve. PMID:25190900
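For context, a standard width-w NAF digit expansion over the integers; the paper's expansion is to the complex base τ with a digit set of endomorphism powers, which is more involved, so this integer version is only the familiar special case:

```python
def width_w_naf(n, w=3):
    """Width-w non-adjacent form: odd digits d with |d| < 2**(w-1) (or 0),
    at most one nonzero digit in any w consecutive positions.
    n is reconstructed as sum(d_i * 2**i)."""
    digits = []
    while n > 0:
        if n & 1:
            d = n % (1 << w)                 # n mod 2^w
            if d >= (1 << (w - 1)):          # map to the signed residue
                d -= (1 << w)
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

digits = width_w_naf(314159, w=3)
assert sum(d << i for i, d in enumerate(digits)) == 314159
# Scalar multiplication then scans digits MSB->LSB, doubling at each step
# and adding points derived (in the paper) from efficient endomorphisms.
```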
Survival curve estimation with dependent left truncated data using Cox's model.
Mackenzie, Todd
2012-10-19
The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
Bimodal SLD Ice Accretion on a NACA 0012 Airfoil Model
NASA Technical Reports Server (NTRS)
Potapczuk, Mark; Tsao, Jen-Ching; King-Steen, Laura
2016-01-01
This presentation describes the results of ice accretion measurements on a NACA 0012 airfoil model, from the NASA Icing Research Tunnel, using an icing cloud composed of a bimodal distribution of Supercooled Large Droplets. The data consist of photographs, laser scans of the ice surface, and measurements of the mass of ice for each icing condition. The ice shapes accumulated under an icing cloud with a bimodal droplet distribution were compared to those resulting from an equivalent cloud composed of a droplet distribution with a standard bell-curve shape.
Note on the single-shock solutions of the Korteweg-de Vries-Burgers equation
NASA Astrophysics Data System (ADS)
Kourakis, Ioannis; Sultana, Sharmin; Verheest, Frank
2012-04-01
The well-known shock solutions of the Korteweg-de Vries-Burgers equation are revisited, together with their limitations in the context of plasma (astro)physical applications. Although available in the literature for a long time, it seems to have been forgotten in recent papers that such shocks are monotonic and unique, for a given plasma configuration, and cannot show oscillatory or bell-shaped features. This uniqueness is contrasted to solitary wave solutions of the two parent equations (Korteweg-de Vries and Burgers), which form a family of curves parameterized by the excess velocity over the linear phase speed.
Investigation of hollow cathode performance for 30-cm thrusters
NASA Technical Reports Server (NTRS)
Mirtich, M. J.
1973-01-01
A parametric investigation of 6.35 mm diameter mercury hollow cathodes was carried out in a bell jar. The parameters that were varied were the amount of initial emissive mix, the insert position, the emission current, the cathode temperature, the orifice diameter, and the mercury flow rate. Flow characteristic curves and performance as a function of time were obtained for the various cathodes of interest. Also presented are the results of a 3880 hr life test of a main cathode run at 15 amps emission current with no noticeable changes in keeper and collector voltages.
The design of an adaptive predictive coder using a single-chip digital signal processor
NASA Astrophysics Data System (ADS)
Randolph, M. A.
1985-01-01
A speech coding processor architecture design study has been performed in which the Texas Instruments TMS32010 was selected from among three commercially available digital signal processing integrated circuits and evaluated in an implementation study of real-time Adaptive Predictive Coding (APC). The TMS32010 was compared with the AT&T Bell Laboratories DSP I and the Nippon Electric Co. µPD7720 and was found to be the most suitable for a single-chip implementation of APC. A preliminary system design based on the TMS32010 has been performed, and several of the hardware and software design issues are discussed. Particular attention was paid to the design of an external memory controller which permits rapid sequential access of external RAM. As a result, it has been determined that a compact hardware implementation of the APC algorithm is feasible based on the TMS32010. Originator-supplied keywords include: vocoders, speech compression, adaptive predictive coding, digital signal processing microcomputers, speech processor architectures, and special purpose processor.
Can violations of Bell's inequalities be considered as a final proof of quantum physics?
NASA Astrophysics Data System (ADS)
Hénault, François
2013-10-01
Nowadays, it is commonly admitted that the experimental violations of Bell's inequalities successfully demonstrated in the last decades by many experimenters are indeed the ultimate proof of quantum physics and of its ability to describe the whole microscopic world and beyond. But the historical and scientific story may not be envisioned so clearly: it starts with the original paper of Einstein, Podolsky and Rosen (EPR) aiming at demonstrating that the formalism of quantum theory is incomplete. It then goes through the works of D. Bohm, to finally proceed to the famous John Bell relationships providing an experimental setup to solve the EPR paradox. This communication proposes an alternative reading of this history, showing that modern experiments based on correlations between light polarizations deviate significantly from the original spirit of the EPR paper. It is concluded that current experimental violations of Bell's inequalities cannot be considered an ultimate proof of the completeness of quantum physics models.
Experimental violation of Bell inequalities for multi-dimensional systems
Lo, Hsin-Pin; Li, Che-Ming; Yabushita, Atsushi; Chen, Yueh-Nan; Luo, Chih-Wei; Kobayashi, Takayoshi
2016-01-01
Quantum correlations between spatially separated parts of a d-dimensional bipartite system (d ≥ 2) have no classical analog. Such correlations, also called entanglements, are not only conceptually important, but also have a profound impact on information science. In theory the violation of Bell inequalities based on local realistic theories for d-dimensional systems provides evidence of quantum nonlocality. Experimental verification is required to confirm whether a quantum system of extremely large dimension can possess this feature, however it has never been performed for large dimension. Here, we report that Bell inequalities are experimentally violated for bipartite quantum systems of dimensionality d = 16 with the usual ensembles of polarization-entangled photon pairs. We also estimate that our entanglement source violates Bell inequalities for extremely high dimensionality of d > 4000. The designed scenario offers a possible new method to investigate the entanglement of multipartite systems of large dimensionality and their application in quantum information processing. PMID:26917246
A small effect of adding antiviral agents in treating patients with severe Bell palsy.
van der Veen, Erwin L; Rovers, Maroeska M; de Ru, J Alexander; van der Heijden, Geert J
2012-03-01
In this evidence-based case report, the authors studied the following clinical question: What is the effect of adding antiviral agents to corticosteroids in the treatment of patients with severe or complete Bell palsy? The search yielded 250 original research articles. The 6 randomized trials of these that could be used all reported low-quality data for answering the clinical question; apart from apparent flaws, they did not primarily include patients with severe or complete Bell palsy. Complete functional facial nerve recovery was seen in 75% of the patients receiving prednisolone only and in 83% with additional antiviral treatment. The pooled risk difference of 7% (95% confidence interval, -1% to 15%) results in a number needed to treat of 14 (ie, slightly favors adding an antiviral agent). The authors conclude that although a strong recommendation for adding antiviral agents to corticosteroids to further improve the recovery of patients with severe Bell palsy is precluded by the lack of robust evidence, it should be discussed with the patient.
Pore Water Pumping by Upside-Down Jellyfish
NASA Astrophysics Data System (ADS)
Gaddam, Manikantam; Santhanakrishnan, Arvind
2016-11-01
Patchy aggregations of Cassiopea medusae, commonly called upside-down jellyfish, are found in sheltered marine environments with low-speed ambient flows. These medusae exhibit a sessile, non-swimming lifestyle, and are oriented such that their bells are attached to the substrate and oral arms point towards sunlight. Pulsations of their bells are used to generate currents for suspension feeding. Their pulsations have also been proposed to generate forces that can release sediment locked nutrients into the surrounding water. The goal of this study is to examine pore water pumping by Cassiopea individuals in laboratory aquaria, as a model for understanding pore water pumping in unsteady flows. Planar laser-induced fluorescence (PLIF) measurements were conducted to visualize the release of pore water via bell motion, using fluorescent dye introduced underneath the substrate. 2D particle image velocimetry (PIV) measurements were conducted on the same individuals to correlate PLIF-based concentration profiles with the jets generated by pulsing of medusae. The effects of varying bell diameter on pore water release and pumping currents will be discussed.
Fast vision-based catheter 3D reconstruction
NASA Astrophysics Data System (ADS)
Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.
2016-07-01
Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge as traditional sensors can not be employed in the way they are in rigid robotic manipulators. In this paper, a high speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrary positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length and bending and orientation angles for known circular and elliptical catheter shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute error of 1.74 mm, 3.64 deg for the added noises) of the proposed high speed algorithms.
Reflections on Three Corporate Research Labs: Bell Labs, HP Labs, Agilent Labs
NASA Astrophysics Data System (ADS)
Hollenhorst, James
2008-03-01
This will be a personal reflection on corporate life and physics-based research in three industrial research labs over three decades, Bell Labs during the 1980's, HP Labs during the 1990's, and Agilent Labs during the 2000's. These were times of great change in all three companies. I'll point out some of the similarities and differences in corporate cultures and how this impacted the research and development activities. Along the way I'll mention some of the great products that resulted from physics-based R&D.
Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Pardo-Vazquez, Jose L; Leboran, Victor; Molenberghs, Geert; Faes, Christel; Acuña, Carlos
2011-06-30
It is well established that neural activity is stochastically modulated over time. Therefore, direct comparisons across experimental conditions and determination of change points or maximum firing rates are not straightforward. This study sought to compare temporal firing probability curves that may vary across groups defined by different experimental conditions. Odds-ratio (OR) curves were used as a measure of comparison, and the main goal was to provide a global test to detect significant differences of such curves through the study of their derivatives. An algorithm is proposed that enables ORs based on generalized additive models, including factor-by-curve-type interactions to be flexibly estimated. Bootstrap methods were used to draw inferences from the derivatives curves, and binning techniques were applied to speed up computation in the estimation and testing processes. A simulation study was conducted to assess the validity of these bootstrap-based tests. This methodology was applied to study premotor ventral cortex neural activity associated with decision-making. The proposed statistical procedures proved very useful in revealing the neural activity correlates of decision-making in a visual discrimination task. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Chen, Li
1999-09-01
Following a general definition of discrete curves, surfaces, and manifolds (Li Chen, 'Generalized discrete object tracking algorithms and implementations,' in Melter, Wu, and Latecki, eds., Vision Geometry VI, SPIE Vol. 3168, pp. 184-195, 1997), this paper focuses on the Jordan curve theorem in 2D discrete spaces. The Jordan curve theorem says that a (simply) closed curve separates a simply connected surface into two components. Based on the definition of discrete surfaces, we give three reasonable definitions of simply connected spaces. Theoretically, these three definitions should be equivalent. We have proved the Jordan curve theorem under the third definition of simply connected spaces. The Jordan theorem shows the relationship among an object, its boundary, and its outside area. In continuous space, the boundary of an mD manifold is an (m - 1)D manifold. A similar result applies to regular discrete manifolds. The concept of a new regular nD-cell is developed based on the regular surface point in 2D and on the well-composed objects in 2D and 3D given by Latecki (L. Latecki, '3D well-composed pictures,' in Melter, Wu, and Latecki, eds., Vision Geometry IV, SPIE Vol. 2573, pp. 196-203, 1995).
Experimentally generated randomness certified by the impossibility of superluminal signals.
Bierhorst, Peter; Knill, Emanuel; Glancy, Scott; Zhang, Yanbao; Mink, Alan; Jordan, Stephen; Rommal, Andrea; Liu, Yi-Kai; Christensen, Bradley; Nam, Sae Woo; Stevens, Martin J; Shalm, Lynden K
2018-04-01
From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable [1-3]. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity [1-11]. With recent technological developments, it is now possible to carry out such a loophole-free Bell test [12-14,22]. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10^-12. These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.
NASA Technical Reports Server (NTRS)
Ruf, Joseph H.; Jones, Daniel
2015-01-01
The dual-bell nozzle (fig. 1) is an altitude-compensating nozzle that has an inner contour consisting of two overlapped bells. At low altitudes, the dual-bell nozzle operates in mode 1, only utilizing the smaller, first bell of the nozzle. In mode 1, the nozzle flow separates from the wall at the inflection point between the two bell contours. As the vehicle reaches higher altitudes, the dual-bell nozzle flow transitions to mode 2, to flow full into the second, larger bell. This dual-mode operation allows near optimal expansion at two altitudes, enabling a higher mission-averaged specific impulse (Isp) relative to that of a conventional, single-bell nozzle. Dual-bell nozzles have been studied analytically and subscale nozzle tests have been completed [1]. This higher mission-averaged Isp can provide up to a 5% increase [2] in payload to orbit for existing launch vehicles. The next important step for the dual-bell nozzle is to confirm its potential in a relevant flight environment. Toward this end, NASA Marshall Space Flight Center (MSFC) and Armstrong Flight Research Center (AFRC) have been working to develop a subscale, hot-fire, dual-bell nozzle test article for flight testing on AFRC's F15-D flight test bed (figs. 2 and 3). Flight test data demonstrating the dual-bell's ability to control the mode transition and deliver a sufficient increase in a rocket's mission-averaged Isp should help convince the launch service providers that the dual-bell nozzle would provide a return on the required investment to bring a dual-bell into flight operation. The Game Changing Department provided 0.2 FTE to ER42 for this effort in 2014.
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burned area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method exploits the different features available in remote sensing images and considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI), and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve obtained from a binary image produced by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm, and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithm complexity than the conventional C-V model. PMID:24503563
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition in an image can be color, shape, pattern, texture, and spatial information that represent objects in the digital image. A method has recently been developed for image feature extraction that analyzes object characteristics through curve analysis (simple curves) and searches features using the object's chain code. This study develops an algorithm for the analysis and recognition of curve types as the basis for object recognition in images, proposing the addition of complex curve characteristics with a maximum of four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
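A minimal sketch of the chain-code feature the abstract leans on: the 8-connected Freeman chain code of an ordered boundary pixel sequence. The boundary-tracing step itself is assumed already done (e.g. by a boundary-following pass over a binary mask).

```python
import numpy as np

# Freeman 8-direction codes for pixel steps (drow, dcol).
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """boundary: (N, 2) sequence of consecutive 8-connected boundary
    pixels (row, col); returns the list of Freeman direction codes."""
    steps = np.diff(np.asarray(boundary), axis=0)
    return [DIRS[(int(dr), int(dc))] for dr, dc in steps]

# Branch points (the 'complex curve' intersections in the paper) would
# show up as curve pixels with more than two curve neighbours.
square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
assert chain_code(square) == [0, 0, 6, 6, 4, 4, 2, 2]
```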
78 FR 56592 - Airworthiness Directives; Bell Helicopter Textron, Inc. (Bell) Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-13
... Airworthiness Directives; Bell Helicopter Textron, Inc. (Bell) Helicopters AGENCY: Federal Aviation...) 76-12- 07 for all Bell Model 204B and certain serial-numbered Model 205A-1 helicopters with a certain... detect a crack in the link segments and, for affected Model 205A-1 helicopters, replacing the chain and...
Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits †
Gámez Serna, Citlalli; Ruichek, Yassine
2017-01-01
A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature with speed limits to automatically adjust vehicle’s speed with the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, ‘curve analysis extraction’ and ‘speed limits database creation’ are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently increase safety and comfort for the passenger. PMID:28613251
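A sketch of the off-line curve-analysis idea described above: estimate curvature along a GPS track from point triplets (Menger curvature) and cap the advisory speed by a lateral-acceleration limit. The 3.4 m/s² comfort limit and the clipping threshold are assumptions, not values from the paper.

```python
import numpy as np

def menger_curvature(p1, p2, p3):
    """Curvature of the circle through three 2D points: 4 * Area / (a*b*c)."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    return 4.0 * area / (a * b * c + 1e-12)

def curve_speed(path_xy, a_lat_max=3.4, v_limit=27.8):
    """Per-point advisory speed (m/s): sqrt(a_lat_max / kappa),
    clipped to the legal speed limit of the segment."""
    k = np.array([menger_curvature(*path_xy[i - 1:i + 2])
                  for i in range(1, len(path_xy) - 1)])
    v = np.sqrt(a_lat_max / np.maximum(k, 1e-6))
    return np.minimum(v, v_limit)
```

High-curvature segments (large kappa) then map directly to the reduced ideal speeds that the DSA controller transitions to.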
Near-optimal matrix recovery from random linear measurements.
Romanov, Elad; Gavish, Matan
2018-06-25
In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix [Formula: see text] from [Formula: see text] measurements [Formula: see text], where each [Formula: see text] is an M-by-N measurement matrix with i.i.d. random entries, [Formula: see text] We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object [Formula: see text] to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the [Formula: see text] plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers. Copyright © 2018 the Author(s). Published by PNAS.
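The optimal shrinker itself is derived in the paper; the sketch below shows only the generic singular-value shrinkage step it is applied in, with a simple soft threshold standing in for the optimal rule.

```python
import numpy as np

def svd_shrink(Y, shrinker):
    """Apply a scalar shrinkage rule to the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * shrinker(s)) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
Y = low_rank + 0.1 * rng.standard_normal((50, 40))         # noisy observation
X_hat = svd_shrink(Y, lambda s: np.maximum(s - 1.0, 0.0))  # soft threshold
```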
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faigler, S.; Mazeh, T.; Tal-Or, L.
We present seven newly discovered non-eclipsing short-period binary systems with low-mass companions, identified by the recently introduced BEER algorithm, applied to the publicly available 138-day photometric light curves obtained by the Kepler mission. The detection is based on the beaming effect (sometimes called Doppler boosting), which increases (decreases) the brightness of any light source approaching (receding from) the observer, enabling a prediction of the stellar Doppler radial-velocity (RV) modulation from its precise photometry. The BEER algorithm identifies the BEaming periodic modulation, with a combination of the well-known Ellipsoidal and Reflection/heating periodic effects, induced by short-period companions. The seven detections were confirmed by spectroscopic RV follow-up observations, indicating minimum secondary masses in the range 0.07-0.4 M_Sun. The binaries discovered establish for the first time the feasibility of the BEER algorithm as a new detection method for short-period non-eclipsing binaries, with the potential to detect in the near future non-transiting brown-dwarf secondaries, or even massive planets.
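A hedged sketch of the harmonic decomposition at the heart of BEER: at a trial period, least-squares fit the beaming (sin φ), reflection (cos φ) and ellipsoidal (cos 2φ) amplitudes of a relative flux series. The full pipeline additionally scans periods and applies physical priors to the coefficients.

```python
import numpy as np

def beer_amplitudes(t, flux, period):
    """Least-squares amplitudes of the beaming (sin phi), reflection
    (cos phi) and ellipsoidal (cos 2 phi) harmonics at a trial period."""
    phi = 2 * np.pi * (t % period) / period
    A = np.column_stack([np.ones_like(t), np.sin(phi),
                         np.cos(phi), np.cos(2 * phi)])
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return coef[1], coef[2], coef[3]   # beaming, reflection, ellipsoidal
```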
Closed geometric models in medical applications
NASA Astrophysics Data System (ADS)
Jagannathan, Lakshmipathy; Nowinski, Wieslaw L.; Raphel, Jose K.; Nguyen, Bonnie T.
1996-04-01
Conventional surface fitting methods give twisted surfaces and complicate capping closures. This is a typical characteristic of surfaces that lack rectangular topology. We suggest an algorithm that overcomes these limitations. The analysis of the algorithm is presented with experimental results. The algorithm assumes that the center of mass lies inside the object. Both capping complications and twisting result from inadequate information on the geometric proximity of points and surfaces that are proximal in the parametric space. Geometric proximity at the contour level is handled by mapping the points along the contour onto a hyper-spherical space. The resulting angular gradation with respect to the centroid is monotonic and hence avoids the twisting problem. Inter-contour geometric proximity is achieved by partitioning the point set based on the angle it makes with the respective centroids. Capping complications are avoided by generating closed cross curves connecting curves that are reflections about the abscissa. The method is of immense use for the generation of deep cerebral structures and is applied to the deep structures generated from the Schaltenbrand-Wahren brain atlas.
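The angular parameterization described above can be sketched in a few lines; this illustrative version (planar contours assumed, not the authors' code) orders each contour's points by angle about its centroid, which is what makes the parameterization monotonic.

```python
import numpy as np

def angular_order(points):
    """Order contour points by angle about their centroid, giving a
    monotonic parameterization that avoids twisting between contours."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - centroid[1], pts[:, 0] - centroid[0])
    return pts[np.argsort(angles)]
```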
Molecular-Level Simulations of Shock Generation and Propagation in Soda-Lime Glass
2012-08-01
M. Grujicic, W.C. Bell, B. Pandurangan, B.A. Cheeseman, C. ...transparent structures with thickness approaching several inches; (b) relatively low material and manufacturing costs; and (c) compositional modifications... (c) models based on explicit crack representation (Ref 15, 16). M. Grujicic, W.C. Bell, and B. Pandurangan, Department of Mechanical...
What Bell proved: A reply to Blaylock
NASA Astrophysics Data System (ADS)
Maudlin, Tim
2010-01-01
Blaylock argues that the derivation of Bell's inequality requires a hidden assumption, counterfactual definiteness, of which Bell was unaware. A careful analysis of Bell's argument shows that Bell presupposes only locality and the predictions of standard quantum mechanics. Counterfactual definiteness, insofar as it is required, is derived in the course of the argument rather than presumed. Bell's theorem has no direct bearing on the many worlds interpretation not because that interpretation denies counterfactual definiteness but because it does not recover the predictions of standard quantum mechanics.
Zhang, Baolin; Tong, Xinglin; Hu, Pan; Guo, Qian; Zheng, Zhiyuan; Zhou, Chaoran
2016-12-26
Optical fiber Fabry-Perot (F-P) sensors have been used in various types of on-line monitoring of physical parameters such as acoustics, temperature and pressure. In this paper, a wavelet phase extracting demodulation algorithm for optical fiber F-P sensing is proposed for the first time. In this demodulation algorithm, the search range of the scale factor is determined by the estimated cavity length, which is obtained by the fast Fourier transform (FFT) algorithm. Phase information of each point on the optical interference spectrum can be directly extracted through the continuous complex wavelet transform without de-noising, and the cavity length of the optical fiber F-P sensor is calculated from the slope of the fitted phase curve. Theoretical analysis and experimental results show that this algorithm can greatly reduce the amount of computation and improve demodulation speed and accuracy.
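The FFT-based cavity-length estimate works because a low-finesse F-P spectrum is approximately I(k) ∝ cos(2Lk), so the dominant frequency along the wavenumber axis is L/π cycles per unit k. A minimal sketch, assuming a uniformly sampled wavenumber axis:

```python
import numpy as np

def estimate_cavity_length(wavenumber, intensity):
    """Coarse cavity length from the dominant FFT frequency of an F-P
    interference spectrum I(k) ~ cos(2*L*k) sampled uniformly in k."""
    spec = np.fft.rfft(intensity - intensity.mean())
    dk = wavenumber[1] - wavenumber[0]
    freqs = np.fft.rfftfreq(len(intensity), d=dk)   # cycles per unit k
    f_peak = freqs[np.argmax(np.abs(spec))]
    return np.pi * f_peak   # cos(2Lk) has frequency L/pi in cycles per k
```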
Esposito, Fabrizio; Formisano, Elia; Seifritz, Erich; Goebel, Rainer; Morrone, Renato; Tedeschi, Gioacchino; Di Salle, Francesco
2002-07-01
Independent component analysis (ICA) has been successfully employed to decompose functional MRI (fMRI) time-series into sets of activation maps and associated time-courses. Several ICA algorithms have been proposed in the neural network literature. Applied to fMRI, these algorithms might lead to different spatial or temporal readouts of brain activation. We compared the two ICA algorithms that have been used so far for spatial ICA (sICA) of fMRI time-series: the Infomax (Bell and Sejnowski [1995]: Neural Comput 7:1004-1034) and the Fixed-Point (Hyvärinen [1999]: Adv Neural Inf Proc Syst 10:273-279) algorithms. We evaluated the Infomax- and Fixed Point-based sICA decompositions of simulated motor, and real motor and visual activation fMRI time-series using an ensemble of measures. Log-likelihood (McKeown et al. [1998]: Hum Brain Mapp 6:160-188) was used as a measure of how significantly the estimated independent sources fit the statistical structure of the data; receiver operating characteristics (ROC) and linear correlation analyses were used to evaluate the algorithms' accuracy of estimating the spatial layout and the temporal dynamics of simulated and real activations; cluster sizing calculations and an estimation of a residual gaussian noise term within the components were used to examine the anatomic structure of ICA components and for the assessment of noise reduction capabilities. Whereas both algorithms produced highly accurate results, the Fixed-Point outperformed the Infomax in terms of spatial and temporal accuracy as long as inferential statistics were employed as benchmarks. Conversely, the Infomax sICA was superior in terms of global estimation of the ICA model and noise reduction capabilities. Because of its adaptive nature, the Infomax approach appears to be better suited to investigate activation phenomena that are not predictable or adequately modelled by inferential techniques. Copyright 2002 Wiley-Liss, Inc.
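For orientation, a hedged usage sketch of the fixed-point algorithm via scikit-learn's FastICA, arranged as spatial ICA (voxels as samples); Infomax is not in scikit-learn, though implementations exist elsewhere (e.g., in MNE-Python). The random matrix is a stand-in for a time-by-voxel fMRI data matrix.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 5000))   # stand-in: 120 time points x 5000 voxels

ica = FastICA(n_components=20, random_state=0, max_iter=500)
spatial_maps = ica.fit_transform(X.T)  # (5000, 20): independent spatial maps
time_courses = ica.mixing_             # (120, 20): associated time-courses
```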
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-22
... License Application for Bell Bend Nuclear Power Plant; Exemption 1.0 Background PPL Bell Bend, LLC... Regulations (10 CFR), Subpart C of Part 52, ``Licenses, Certifications, and Approvals for Nuclear Power Plants.'' This reactor is to be identified as Bell Bend Nuclear Power Plant (BBNPP), in Salem County...
Retinal image quality assessment based on image clarity and content
NASA Astrophysics Data System (ADS)
Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim
2016-09-01
Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
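As one hedged illustration of a wavelet-based clarity cue (not the paper's exact feature set), the ratio of detail-band to total energy in a single 2D wavelet decomposition rises with image sharpness; this uses the PyWavelets package.

```python
import numpy as np
import pywt

def wavelet_sharpness(gray):
    """Ratio of detail-band to total wavelet energy as a crude sharpness cue
    for a 2D grayscale image array."""
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(gray, dtype=float), "haar")
    detail = sum(np.sum(c ** 2) for c in (cH, cV, cD))
    return detail / (detail + np.sum(cA ** 2))
```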
NASA Astrophysics Data System (ADS)
Kazakis, Nikolaos A.
2018-01-01
The present comment concerns the correct presentation of an algorithm proposed in the above paper for glow-curve deconvolution in the case of a continuous distribution of trapping states. Since most researchers would use the proposed algorithm directly as published, they should be notified of its correct formulation for fitting the TL glow curves of materials with a continuous trap distribution using this equation.
Majorana neutrino signals at Belle-II and ILC
NASA Astrophysics Data System (ADS)
Yue, Chong-Xing; Guo, Yu-Chen; Zhao, Zhen-Hua
2017-12-01
Owing to various theoretical and experimental considerations, relatively light Majorana neutrinos at the GeV scale have been attracting some interest. In this article we consider a scenario with only one Majorana neutrino N with negligible mixing with the active neutrinos νL, where the Majorana neutrino interactions are described in a model-independent approach based on an effective theory. Within this framework, we study the feasibility of observing N with mass in the range 0-30 GeV via the process e+e- → νN → γ + E̸ in the future Belle-II and ILC experiments. The results show that it is unpromising for Belle-II to observe the signal, while ILC may easily make a discovery for the Majorana neutrino.
NOTE: A BPF-type algorithm for CT with a curved PI detector
NASA Astrophysics Data System (ADS)
Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping
2006-08-01
Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.
PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
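A hedged miniature of the second pipeline stage: a boosted-tree classifier scored by ROC AUC with scikit-learn. The random features are stand-ins for SALT2 or wavelet features extracted from light curves.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))                    # stand-in feature vectors
y = X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(2000) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)     # boosted decision trees
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```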
Crash testing difference-smoothing algorithm on a large sample of simulated light curves from TDC1
NASA Astrophysics Data System (ADS)
Rathna Kumar, S.
2017-09-01
In this work, we propose refinements to the difference-smoothing algorithm for the measurement of the time delay from the light curves of the images of a gravitationally lensed quasar. The refinements mainly consist of a more pragmatic approach to choosing the smoothing time-scale free parameter, the generation of more realistic synthetic light curves for the estimation of the time delay uncertainty, and the use of a plot of normalized χ2 computed over a wide range of trial time delay values to assess the reliability of a measured time delay and to identify instances of catastrophic failure. We rigorously tested the difference-smoothing algorithm on a large sample of more than a thousand pairs of simulated light curves having known true time delays between them from the two most difficult 'rungs' (rung3 and rung4) of the first edition of the Strong Lens Time Delay Challenge (TDC1) and found an inherent tendency of the algorithm to measure the magnitude of the time delay to be higher than the true value. However, we find that this systematic bias is eliminated by applying a correction to each measured time delay according to the magnitude and sign of the systematic error inferred by applying the time delay estimator on synthetic light curves simulating the measured time delay. Following these refinements, the TDC performance metrics for the difference-smoothing algorithm are found to be competitive with those of the best performing submissions of TDC1 for both of the tested 'rungs'. The MATLAB codes used in this work and the detailed results are made publicly available.
Classification of ASKAP Vast Radio Light Curves
NASA Technical Reports Server (NTRS)
Rebbapragada, Umaa; Lo, Kitty; Wagstaff, Kiri L.; Reed, Colorado; Murphy, Tara; Thompson, David R.
2012-01-01
The VAST survey is a wide-field survey that observes with unprecedented instrument sensitivity (0.5 mJy or lower) and repeat cadence (a goal of 5 seconds) that will enable novel scientific discoveries related to known and unknown classes of radio transients and variables. Given the unprecedented observing characteristics of VAST, it is important to estimate source classification performance, and determine best practices prior to the launch of ASKAP's BETA in 2012. The goal of this study is to identify light curve characterization and classification algorithms that are best suited for archival VAST light curve classification. We perform our experiments on light curve simulations of eight source types and achieve best case performance of approximately 90% accuracy. We note that classification performance is most influenced by light curve characterization rather than classifier algorithm.
Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-01
Small fields (smaller than 4×4 cm²) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media often involves larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-XiO, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements are made using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons with square field sizes ranging from 1×1 to 4×4 cm². Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root-mean-squared deviation (%NRMSD) is calculated, which represents the deviation of the whole CADD curve from the measured one. It is found that for air and lung heterogeneities, for both 6 and 15 MV, all algorithms show maximum deviation for the 1×1 cm² field, and the deviation gradually reduces as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1×1 cm² field showed maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity is nearer to Dmax than when the same heterogeneity is far from Dmax. Also, all algorithms show maximum deviation in lower-density materials compared to high-density materials. PACS numbers: 87.53.Bn, 87.53.kn, 87.56.bd, 87.55.Kd, 87.56.jf PMID:26894345
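The exact normalization convention behind %NRMSD is not spelled out in the abstract; a plausible sketch, normalizing the RMS deviation by the range of the measured curve, is:

```python
import numpy as np

def percent_nrmsd(measured, calculated):
    """Percentage normalized RMS deviation between a measured and a
    TPS-calculated depth-dose curve sampled at the same depths."""
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
    return 100.0 * rmsd / (measured.max() - measured.min())
```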
NASA Astrophysics Data System (ADS)
Gallon, Régis K.; Lavesque, Nicolas; Grall, Jacques; Labrune, Céline; Gremare, Antoine; Bachelet, Guy; Blanchet, Hugues; Bonifácio, Paulo; Bouchet, Vincent M. P.; Dauvin, Jean-Claude; Desroy, Nicolas; Gentil, Franck; Guerin, Laurent; Houbin, Céline; Jourde, Jérôme; Laurand, Sandrine; Le Duff, Michel; Le Garrec, Vincent; de Montaudouin, Xavier; Olivier, Frédéric; Orvain, Francis; Sauriau, Pierre-Guy; Thiebaut, Éric; Gauthier, Olivier
2017-12-01
This study aims to describe the patterns of soft bottom macrozoobenthic richness along French coasts. It is based on a collaborative database developed by the "Réseau des Stations et Observatoires Marins" (RESOMAR). We investigated patterns of species richness in sublittoral soft bottom habitats (EUNIS level 3) at two different spatial scales: 1) seaboards: English Channel, Bay of Biscay and Mediterranean Sea and 2) 0.5° latitudinal and longitudinal grid. Total observed richness, rarefaction curves and three incidence-based richness estimators (Chao2, ICE and Jackknife1) were used to compare soft bottom habitat species richness in each seaboard. Overall, the Mediterranean Sea has the highest richness and, despite higher sampling effort, the English Channel hosts the lowest number of species. The distribution of species occurrence within and between seaboards was assessed for each major phylum using constrained rarefaction curves. The Mediterranean Sea hosts the highest number of exclusive species. In pairwise comparisons, it also shares a lower proportion of taxa with the Bay of Biscay (34.1%) or the English Channel (27.6%) than that shared between these two seaboards (49.7%). Latitudinal species richness patterns along the Atlantic and English Channel coasts were investigated for each major phylum using partial LOESS regression controlling for sampling effort. This showed the existence of a bell-shaped latitudinal pattern, highlighting Brittany as a hotspot for macrobenthic richness at the confluence of two biogeographic provinces.
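For reference, the bias-corrected Chao2 estimator used above is simple to compute from a samples-by-species incidence matrix; a minimal sketch:

```python
import numpy as np

def chao2(incidence):
    """Bias-corrected Chao2 richness estimate from a samples x species
    presence/absence matrix."""
    inc = np.asarray(incidence, dtype=bool)
    m = inc.shape[0]                # number of samples
    counts = inc.sum(axis=0)        # samples in which each species occurs
    s_obs = int((counts > 0).sum())
    q1 = int((counts == 1).sum())   # "uniques"
    q2 = int((counts == 2).sum())   # "duplicates"
    return s_obs + (m - 1) / m * q1 * (q1 - 1) / (2 * (q2 + 1))
```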
Delahanty, Ryan J; Kaufman, David; Jones, Spencer S
2018-06-01
Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals), and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death score has many attractive attributes that address the key barriers to adoption of ICU risk adjustment algorithms and performs comparably to existing human-intensive algorithms. Automated risk adjustment algorithms have the potential to obviate known barriers to adoption such as cost-prohibitive licensing fees and significant direct labor costs. Further evaluation is needed to ensure that the level of performance observed in this study could be achieved at independent sites.
NASA Technical Reports Server (NTRS)
Gedney, Stephen D.; Lansing, Faiza
1993-01-01
The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has a significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids. The robustness of the generalized Yee-algorithm lies in the fact that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.
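For contrast with the unstructured generalization described above, the classic structured Yee update is a two-line leapfrog; a minimal 1D sketch in normalized units (unit Courant number), not the paper's algorithm:

```python
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)            # electric field on primary grid nodes
hy = np.zeros(nz - 1)        # magnetic field on the staggered (dual) grid

for n in range(nt):
    hy += np.diff(ez)        # Faraday's law update (normalized units)
    ez[1:-1] += np.diff(hy)  # Ampere's law update (normalized units)
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
```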
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2013-01-01
In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2014-01-01
In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…
An Algorithm for Protein Helix Assignment Using Helix Geometry
Cao, Chen; Xu, Shutan; Wang, Lincong
2015-01-01
Helices are among the most common and earliest recognized secondary structure elements in proteins. The assignment of helices in a protein underlies the analysis of its structure and function. Though the mathematical expression for a helical curve is simple, no previous assignment programs have used a genuine helical curve as a model for helix assignment. In this paper we present a two-step assignment algorithm. The first step searches for a series of bona fide helical curves, each of which best fits the coordinates of four successive backbone Cα atoms. The second step uses the best-fit helical curves as input to make the helix assignment. Application to the protein structures in the PDB (Protein Data Bank) shows that the algorithm is able to accurately assign not only regular α-helices but also 3(10) and π helices, as well as their left-handed versions. One salient feature of the algorithm is that the assigned helices are structurally more uniform than those produced by previous programs. The structural uniformity should be useful for protein structure classification and prediction, while the accurate assignment of a helix to a particular type underlies structure-function relationships in proteins. PMID:26132394
A FEM-based method to determine the complex material properties of piezoelectric disks.
Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C
2014-08-01
Numerical simulations allow modeling piezoelectric devices and ultrasonic transducers. However, the accuracy in the results is limited by the precise knowledge of the elastic, dielectric and piezoelectric properties of the piezoelectric material. To introduce the energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists in obtaining the material properties that minimize the error between experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter over a set of resonant modes. Sensitivity results are used to implement a preliminary algorithm approaching the solution in order to avoid the search to be trapped into a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both electrical impedance curve and for the displacement profile over the disk surface. The agreement between numerical and experimental displacement profiles shows that, although only the electrical impedance curve is considered in the adjustment procedure, the obtained material properties allow simulating the displacement amplitude accurately. Copyright © 2014 Elsevier B.V. All rights reserved.
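To illustrate the fitting loop without a FEM solver, the sketch below substitutes a Butterworth-Van Dyke equivalent circuit (a standard lumped model of one piezoelectric resonance) for the paper's FEM model and recovers its parameters from a synthetic impedance curve with SciPy; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def bvd_impedance(params, f):
    """Butterworth-Van Dyke impedance: motional RLC branch in parallel
    with the clamped capacitance C0 (stand-in for the FEM model)."""
    r, l, c, c0 = params
    w = 2 * np.pi * f
    z_motional = r + 1j * w * l + 1.0 / (1j * w * c)
    return 1.0 / (1j * w * c0 + 1.0 / z_motional)

def residual(params, f, z_meas):
    # Compare log-magnitudes so resonance and antiresonance count equally.
    return np.log(np.abs(bvd_impedance(params, f))) - np.log(np.abs(z_meas))

f = np.linspace(0.9e6, 1.1e6, 400)            # Hz, spanning one mode
true = (50.0, 10e-3, 2.5e-12, 1e-9)           # R [ohm], L [H], C [F], C0 [F]
z_meas = bvd_impedance(true, f)
fit = least_squares(residual, x0=(100.0, 8e-3, 3e-12, 1.2e-9), args=(f, z_meas))
print(fit.x)                                  # recovered circuit parameters
```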
A Hough Transform Global Probabilistic Approach to Multiple-Subject Diffusion MRI Tractography
Aganj, Iman; Lenglet, Christophe; Jahanshad, Neda; Yacoub, Essa; Harel, Noam; Thompson, Paul M.; Sapiro, Guillermo
2011-01-01
A global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced in this work. The proposed framework tests candidate 3D curves in the volume, assigning to each one a score computed from the diffusion images, and then selects the curves with the highest scores as the potential anatomical connections. The algorithm avoids local minima by performing an exhaustive search at the desired resolution. The technique is easily extended to multiple subjects, considering a single representative volume where the registered high-angular resolution diffusion images (HARDI) from all the subjects are non-linearly combined, thereby obtaining population-representative tracts. The tractography algorithm is run only once for the multiple subjects, and no tract alignment is necessary. We present experimental results on HARDI volumes, ranging from simulated and 1.5T physical phantoms to 7T and 4T human brain and 7T monkey brain datasets. PMID:21376655
Spectral analysis of stellar light curves by means of neural networks
NASA Astrophysics Data System (ADS)
Tagliaferri, R.; Ciaramella, A.; Milano, L.; Barone, F.; Longo, G.
1999-06-01
Periodicity analysis of unevenly collected data is a relevant issue in several scientific fields. In astrophysics, for example, we have to find the fundamental period of light or radial velocity curves, which are unevenly sampled observations of stars. Classical spectral analysis methods are unsatisfactory for this problem. In this paper we present a neural-network-based estimator system which performs frequency extraction well in unevenly sampled signals. It uses an unsupervised Hebbian nonlinear neural algorithm to extract, from the interpolated signal, the principal components which, in turn, are used by the MUSIC frequency estimator algorithm to extract the frequencies. The neural network is tolerant of noise and also works well with few points in the sequence. We benchmark the system on synthetic and real signals against the periodogram and the Cramér-Rao lower bound. This work was partially supported by IIASS, by MURST (40%) and by the Italian Space Agency.
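As a point of comparison for the neural estimator, the classical tool for period search in unevenly sampled light curves is the Lomb-Scargle periodogram; a hedged baseline sketch using Astropy:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 300))                  # uneven sampling times
y = np.sin(2 * np.pi * t / 7.3) + 0.3 * rng.standard_normal(t.size)

frequency, power = LombScargle(t, y).autopower()
print(1.0 / frequency[np.argmax(power)])               # close to the 7.3 period
```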
DOT National Transportation Integrated Search
2018-04-01
Crashes occur every day on Utah's highways. Curves can be particularly dangerous, as they require driver focus due to potentially unseen hazards. Often, crashes occur on curves due to poor curve geometry, a lack of warning signs, or poor surface con...
46 CFR 78.47-7 - General alarm bells.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 3 2010-10-01 2010-10-01 false General alarm bells. 78.47-7 Section 78.47-7 Shipping... and Emergency Equipment, Etc. § 78.47-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR STATION.” (b...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
... NUCLEAR REGULATORY COMMISSION [Docket No. 52-039; NRC-2008-0603] PPL Bell Bend, LLC; Bell Bend... October 18, 2013 request from PPL Bell Bend, LLC (PPL). PPL requested an exemption from certain regulatory... Bend, LLC (PPL) submitted to the U.S. Nuclear Regulatory Commission (NRC) a Combined License (COL...
Jogenfors, Jonathan; Elhassan, Ashraf Mohamed; Ahrens, Johan; Bourennane, Mohamed; Larsson, Jan-Åke
2015-12-01
Photonic systems based on energy-time entanglement have been proposed to test local realism using the Bell inequality. A violation of this inequality normally also certifies security of device-independent quantum key distribution (QKD) so that an attacker cannot eavesdrop or control the system. We show how this security test can be circumvented in energy-time entangled systems when using standard avalanche photodetectors, allowing an attacker to compromise the system without leaving a trace. We reach Bell values up to 3.63 at 97.6% faked detector efficiency using tailored pulses of classical light, which exceeds even the quantum prediction. This is the first demonstration of a violation-faking source that gives both tunable violation and high faked detector efficiency. The implications are severe: the standard Clauser-Horne-Shimony-Holt inequality cannot be used to show device-independent security for energy-time entanglement setups based on Franson's configuration. However, device-independent security can be reestablished, and we conclude by listing a number of improved tests and experimental setups that would protect against all current and future attacks of this type.
Payet, Marcel D; Goodfriend, Theodore L; Bilodeau, Lyne; Mackendale, Cherilu; Chouinard, Lucie; Gallo-Payet, Nicole
2006-12-01
EKODE, an epoxy-keto derivative of linoleic acid, was previously shown to stimulate aldosterone secretion in rat adrenal glomerulosa cells. In the present study, we investigated the effect of exogenous EKODE on cytosolic [Ca(2+)] increase and aimed to elucidate the mechanism involved in this process. Through the use of the fluorescent Ca(2+)-sensitive dye Fluo-4, EKODE was shown to rapidly increase intracellular [Ca(2+)] ([Ca(2+)](i)) along a bell-shaped dose-response relationship with a maximum peak at 5 microM. Experiments performed in the presence or absence of Ca(2+) revealed that this increase in [Ca(2+)](i) originated exclusively from intracellular pools. EKODE-induced [Ca(2+)](i) increase was blunted by prior application of angiotensin II, Xestospongin C, and cyclopiazonic acid, indicating that inositol trisphosphate (InsP(3))-sensitive Ca(2+) stores can be mobilized by EKODE despite the absence of InsP(3) production. Accordingly, EKODE response was not sensitive to the phospholipase C inhibitor U-73122. EKODE mobilized a Ca(2+) store included in the thapsigargin (TG)-sensitive stores, although the interaction between EKODE and TG appears complex, since EKODE added at the plateau response of TG induced a rapid drop in [Ca(2+)](i). 9-oxo-octadecadienoic acid, another oxidized derivative of linoleic acid, also increases [Ca(2+)](i), with a dose-response curve similar to EKODE. However, arachidonic and linoleic acids at 10 microM failed to increase [Ca(2+)](i) but did reduce the amplitude of the response to EKODE. It is concluded that EKODE mobilizes Ca(2+) from an InsP(3)-sensitive store and that this [Ca(2+)](i) increase is responsible for aldosterone secretion by glomerulosa cells. Similar bell-shaped dose-response curves for aldosterone and [Ca(2+)](i) increases reinforce this hypothesis.
Gronseth, Gary S; Paduga, Remia
2012-11-27
To review evidence published since the 2001 American Academy of Neurology (AAN) practice parameter regarding the effectiveness, safety, and tolerability of steroids and antiviral agents for Bell palsy. We searched Medline and the Cochrane Database of Controlled Clinical Trials for studies published since January 2000 that compared facial functional outcomes in patients with Bell palsy receiving steroids/antivirals with patients not receiving these medications. We graded each study (Class I-IV) using the AAN therapeutic classification of evidence scheme. We compared the proportion of patients recovering facial function in the treated group with the proportion of patients recovering facial function in the control group. Nine studies published since June 2000 on patients with Bell palsy receiving steroids/antiviral agents were identified. Two of these studies were rated Class I because of high methodologic quality. For patients with new-onset Bell palsy, steroids are highly likely to be effective and should be offered to increase the probability of recovery of facial nerve function (2 Class I studies, Level A) (risk difference 12.8%-15%). For patients with new-onset Bell palsy, antiviral agents in combination with steroids do not increase the probability of facial functional recovery by >7%. Because of the possibility of a modest increase in recovery, patients might be offered antivirals (in addition to steroids) (Level C). Patients offered antivirals should be counseled that a benefit from antivirals has not been established, and, if there is a benefit, it is likely that it is modest at best.
Hexahedral mesh generation via the dual arrangement of surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, S.A.; Tautges, T.J.
1997-12-31
Given a general three-dimensional geometry with a prescribed quadrilateral surface mesh, the authors consider the problem of constructing a hexahedral mesh of the geometry whose boundary is exactly the prescribed surface mesh. Due to the specialized topology of hexahedra, this problem is more difficult than the analogous one for tetrahedra. Folklore has maintained that a surface mesh must have a constrained structure in order for there to exist a compatible hexahedral mesh. However, the authors prove that a surface mesh need only satisfy mild parity conditions, depending on the topology of the three-dimensional geometry, for a compatible hexahedral mesh to exist. The proof is based on the realization that a hexahedral mesh is dual to an arrangement of surfaces, and the quadrilateral surface mesh is dual to the arrangement of curves bounding these surfaces. The proof is constructive, and the authors are currently developing an algorithm called Whisker Weaving (WW) that mirrors the proof steps. Given the bounding curves, WW builds the topological structure of an arrangement of surfaces having those curves as its boundary. WW progresses in an advancing-front manner. Certain local rules are applied to avoid structures that lead to poor mesh quality. Also, after the arrangement is constructed, additional surfaces are inserted to separate features, so that, e.g., no two hexahedra share more than one quadrilateral face. The algorithm has generated meshes for certain non-trivial problems, but is currently unreliable. The authors are exploring strategies for consistently selecting which portion of the surface arrangement to advance based on the existence proof. This should lead to a robust algorithm for arbitrary geometries and surface meshes.
Cryptanalysis of a semi-quantum secret sharing scheme based on Bell states
NASA Astrophysics Data System (ADS)
Gao, Gan; Wang, Yue; Wang, Dong
2018-03-01
In the paper [Mod. Phys. Lett. B 31 (2017) 1750150], Yin et al. proposed a semi-quantum secret sharing scheme using Bell states. We find that the proposed scheme cannot accomplish the quantum secret sharing task. In addition, we also find that the proposed scheme has a security loophole: the dishonest participant, Charlie, can attack the quantum channel without being detected.
Three-observer Bell inequality violation on a two-qubit entangled state
NASA Astrophysics Data System (ADS)
Schiavon, Matteo; Calderaro, Luca; Pittaluga, Mirko; Vallone, Giuseppe; Villoresi, Paolo
2017-03-01
Bipartite Bell inequalities can be violated simultaneously by two different pairs of observers when weak measurements and signalling are employed. Here, we experimentally demonstrate the violation of two simultaneous CHSH inequalities by exploiting a two-photon polarisation maximally entangled state. Our results demonstrate that a large double violation is experimentally achievable. Our demonstration may have an impact on quantum key distribution or the certification of quantum random number generators based on weak measurements.
Planetary Transmission Diagnostics
NASA Technical Reports Server (NTRS)
Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.
2004-01-01
This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
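A minimal sketch of the predict step of the lifting scheme described above, assuming an even-length signal; an adaptive variant would choose, per analysis domain, the predictor minimizing the detail energy subject to the slope and curvature constraints.

```python
import numpy as np

def lifting_predict(signal, predict):
    """One lifting predict step: split into even/odd samples, predict the
    odds from the evens, keep the prediction error as detail coefficients."""
    even, odd = signal[::2], signal[1::2]
    detail = odd - predict(even)   # large where the chosen basis fits poorly
    return even, detail

# Linear predictor: average of the two neighboring even samples
# (wrapping at the boundary for simplicity).
linear = lambda e: 0.5 * (e + np.roll(e, -1))
approx, detail = lifting_predict(np.sin(np.linspace(0.0, 6.28, 64)), linear)
```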
A programmable two-qubit quantum processor in silicon
NASA Astrophysics Data System (ADS)
Watson, T. F.; Philips, S. G. J.; Kawakami, E.; Ward, D. R.; Scarlino, P.; Veldhorst, M.; Savage, D. E.; Lagally, M. G.; Friesen, Mark; Coppersmith, S. N.; Eriksson, M. A.; Vandersypen, L. M. K.
2018-03-01
Now that it is possible to achieve measurement and control fidelities for individual quantum bits (qubits) above the threshold for fault tolerance, attention is moving towards the difficult task of scaling up the number of physical qubits to the large numbers that are needed for fault-tolerant quantum computing. In this context, quantum-dot-based spin qubits could have substantial advantages over other types of qubit owing to their potential for all-electrical operation and ability to be integrated at high density onto an industrial platform. Initialization, readout and single- and two-qubit gates have been demonstrated in various quantum-dot-based qubit representations. However, as seen with small-scale demonstrations of quantum computers using other types of qubit, combining these elements leads to challenges related to qubit crosstalk, state leakage, calibration and control hardware. Here we overcome these challenges by using carefully designed control techniques to demonstrate a programmable two-qubit quantum processor in a silicon device that can perform the Deutsch–Jozsa algorithm and the Grover search algorithm, canonical examples of quantum algorithms that outperform their classical analogues. We characterize the entanglement in our processor by using quantum-state tomography of Bell states, measuring state fidelities of 85–89 per cent and concurrences of 73–82 per cent. These results pave the way for larger-scale quantum computers that use spins confined to quantum dots.
NASA Astrophysics Data System (ADS)
Oh, Hyun-Joo; Pradhan, Biswajeet
2011-09-01
This paper presents landslide-susceptibility mapping using an adaptive neuro-fuzzy inference system (ANFIS) in a geographic information system (GIS) environment. In the first stage, landslide locations in the study area were identified by interpreting aerial photographs, supported by an extensive field survey. In the second stage, landslide-related conditioning factors such as altitude, slope angle, plan curvature, distance to drainage, distance to road, soil texture and stream power index (SPI) were extracted from the topographic and soil maps. Landslide-susceptible areas were then analyzed by the ANFIS approach and mapped using the landslide-conditioning factors. In particular, various membership functions (MFs) were applied for the landslide-susceptibility mapping, and their results were compared with the field-verified landslide locations. Additionally, the receiver operating characteristic (ROC) curves for all landslide susceptibility maps were drawn and the area-under-curve values were calculated. The ROC curve technique is based on plotting model sensitivity (true positive fraction values calculated for different threshold values) versus model specificity (true negative fraction values) on a graph. Landslide test locations that were not used during ANFIS modeling were used to validate the landslide susceptibility maps. The validation results revealed that the susceptibility maps constructed by the ANFIS predictive models using triangular, trapezoidal, generalized bell and polynomial MFs produced reasonable results (84.39%), which can be used for preliminary land-use planning. Finally, the authors concluded that ANFIS is a very useful and effective tool in regional landslide susceptibility assessment.
The SARS algorithm: detrending CoRoT light curves with Sysrem using simultaneous external parameters
NASA Astrophysics Data System (ADS)
Ofir, Aviv; Alonso, Roi; Bonomo, Aldo Stefano; Carone, Ludmila; Carpano, Stefania; Samuel, Benjamin; Weingrill, Jörg; Aigrain, Suzanne; Auvergne, Michel; Baglin, Annie; Barge, Pierre; Borde, Pascal; Bouchy, Francois; Deeg, Hans J.; Deleuil, Magali; Dvorak, Rudolf; Erikson, Anders; Mello, Sylvio Ferraz; Fridlund, Malcolm; Gillon, Michel; Guillot, Tristan; Hatzes, Artie; Jorda, Laurent; Lammer, Helmut; Leger, Alain; Llebaria, Antoine; Moutou, Claire; Ollivier, Marc; Päetzold, Martin; Queloz, Didier; Rauer, Heike; Rouan, Daniel; Schneider, Jean; Wuchterl, Guenther
2010-05-01
Surveys for exoplanetary transits are usually limited not by photon noise but rather by the amount of red noise in their data. In particular, although the CoRoT space-based survey data are being carefully scrutinized, significant new sources of systematic noise are still being discovered. Recently, a magnitude-dependent systematic effect was discovered in the CoRoT data by Mazeh et al. and a phenomenological correction was proposed. Here we tie the observed effect to a particular type of systematic effect, and in the process generalize the popular Sysrem algorithm to include external parameters in a simultaneous solution with the unknown effects. We show that a post-processing scheme based on this algorithm performs well and indeed allows for the detection of new transit-like signals that were not previously detected.
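For context, the core of the original Sysrem iteration (before the paper's generalization to simultaneous external parameters) alternates two weighted least-squares updates; a minimal sketch:

```python
import numpy as np

def sysrem_component(residuals, sigma, n_iter=20):
    """One Sysrem systematic component for a stars x exposures matrix of
    light-curve residuals: alternately fit per-star coefficients c and
    per-exposure effects a minimizing sum(((r - c*a)/sigma)**2)."""
    w = 1.0 / sigma ** 2
    a = np.ones(residuals.shape[1])
    for _ in range(n_iter):
        c = (residuals * a * w).sum(axis=1) / (w * a ** 2).sum(axis=1)
        a = (residuals * c[:, None] * w).sum(axis=0) / (w * c[:, None] ** 2).sum(axis=0)
    return np.outer(c, a)   # systematic effect to subtract from the residuals
```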
Guyot, Patricia; Ades, A E; Ouwens, Mario J N M; Welton, Nicky J
2012-02-01
The results of randomized controlled trials (RCTs) on time-to-event outcomes are usually reported as median times to event and Cox hazard ratios. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. In order to enhance the quality of secondary data analyses, we propose a method which derives from published Kaplan-Meier survival curves a close approximation to the original individual patient time-to-event data from which they were generated. We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using, where available, information on the number of events and the numbers at risk. The reproducibility and accuracy of survival probabilities, median survival times and hazard ratios based on reconstructed KM data were assessed by comparing published statistics (survival probabilities, medians and hazard ratios) with statistics based on repeated reconstructions by multiple observers. The validation exercise established that there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can only be obtained if at least the numbers at risk or the total number of events are reported. The algorithm is a reliable tool for meta-analysis and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report information on numbers at risk and the total number of events alongside KM curves.
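The core inversion in the algorithm can be stated in one line: a KM survival step from s_prev to s_curr with n patients at risk implies d = n(1 - s_curr/s_prev) events. A simplified sketch (the published algorithm additionally apportions censoring between the reported numbers at risk):

```python
def events_from_km_step(n_at_risk, s_prev, s_curr):
    """Invert one Kaplan-Meier step: implied event count given the
    survival probability drop and the number at risk."""
    return round(n_at_risk * (1.0 - s_curr / s_prev))

print(events_from_km_step(100, 0.80, 0.76))  # -> 5 events
```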
Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E
2014-09-16
A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² is discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with the weighted least-squares regression algorithm.
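The weighting rule translates directly into a weighted fit; a minimal NumPy sketch for the linear case. Note that np.polyfit's w multiplies the residuals (w = 1/σ), so a 1/x² variance weighting corresponds to w = 1/x.

```python
import numpy as np

def weighted_linear_fit(x, y, weighting="1/x^2"):
    """Weighted least-squares calibration line with 1, 1/x or 1/x^2
    weighting of the squared residuals."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = {"1": np.ones_like(x), "1/x": 1.0 / np.sqrt(x), "1/x^2": 1.0 / x}[weighting]
    slope, intercept = np.polyfit(x, y, 1, w=w)
    return slope, intercept

print(weighted_linear_fit([1, 2, 5, 10, 50], [2.1, 3.9, 10.2, 19.8, 101.0]))
```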
Beyramysoltan, Samira; Abdollahi, Hamid; Rajkó, Róbert
2014-05-27
Analytical self-modeling curve resolution (SMCR) methods resolve data sets to a range of feasible solutions using only non-negativity constraints. The Lawton-Sylvestre method was the first direct method for analyzing a two-component system; it was generalized as the Borgen plot for determining the feasible regions in three-component systems. A geometrical view seems to be required when considering curve resolution methods, because the complicated, purely algebraic treatment caused a 20-year pause in the general study of Borgen's work. Rajkó and István revised and elucidated the principles of the existing theory of SMCR methods and subsequently introduced computational geometry tools to develop an algorithm for drawing Borgen plots in three-component systems. These developments are theoretical inventions, and the formulations cannot always be given in closed form or regularized formalism, especially for geometric descriptions, which is why several algorithms have had to be developed and provided even for the theoretical deductions and determinations. In this study, analytical SMCR methods are revised and described using simple concepts. The details of a drawing algorithm for a developmental type of Borgen plot are given. Additionally, for the first time in the literature, equality and unimodality constraints are successfully implemented in the Lawton-Sylvestre method. To this end, a new state-of-the-art procedure is proposed to impose an equality constraint in Borgen plots. Two- and three-component HPLC-DAD data sets were simulated and analyzed by the new analytical curve resolution methods with and without additional constraints. Detailed descriptions and explanations are given based on the obtained abstract spaces. Copyright © 2014 Elsevier B.V. All rights reserved.
Sengupta, Partho P; Huang, Yen-Min; Bansal, Manish; Ashrafi, Ali; Fisher, Matt; Shameer, Khader; Gall, Walt; Dudley, Joel T
2016-06-01
Associating a patient's profile with the memories of prototypical patients built through previous repeat clinical experience is a key process in clinical judgment. We hypothesized that a similar process using a cognitive computing tool would be well suited for learning and recalling multidimensional attributes of speckle tracking echocardiography data sets derived from patients with known constrictive pericarditis and restrictive cardiomyopathy. Clinical and echocardiographic data of 50 patients with constrictive pericarditis and 44 with restrictive cardiomyopathy were used for developing an associative memory classifier-based machine-learning algorithm. The speckle tracking echocardiography data were normalized in reference to 47 controls with no structural heart disease, and the diagnostic area under the receiver operating characteristic curve of the associative memory classifier was evaluated for differentiating constrictive pericarditis from restrictive cardiomyopathy. Using only speckle tracking echocardiography variables, associative memory classifier achieved a diagnostic area under the curve of 89.2%, which improved to 96.2% with addition of 4 echocardiographic variables. In comparison, the area under the curve of early diastolic mitral annular velocity and left ventricular longitudinal strain were 82.1% and 63.7%, respectively. Furthermore, the associative memory classifier demonstrated greater accuracy and shorter learning curves than other machine-learning approaches, with accuracy asymptotically approaching 90% after a training fraction of 0.3 and remaining flat at higher training fractions. This study demonstrates feasibility of a cognitive machine-learning approach for learning and recalling patterns observed during echocardiographic evaluations. Incorporation of machine-learning algorithms in cardiac imaging may aid standardized assessments and support the quality of interpretations, particularly for novice readers with limited experience. © 2016 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Pavičić, Mladen
2013-04-01
We show that in any quantum direct communication protocol that is based on Ψ and Φ Bell states, an eavesdropper can always tell Ψ from Φ states without altering the transmission in any way in the message mode. This renders all protocols that make use of only one Ψ state and one Φ state completely insecure in the message mode. All four-Bell-state protocols require a revision and this might be of importance for new implementations of entanglement-based cryptographic protocols. The detection rate of an eavesdropper is 25% per control transmission, i.e., a half of the rate in the two-state (ping-pong) protocol. An eavesdropper can detect control probes with certainty in the standard control transmission without a photon in the Alice-to-Bob's travel mode and with near certainty in a transmission with a fake photon in the travel mode. Resending of measured control photons via the travel mode would make an eavesdropper completely invisible.
Bell's facial nerve palsy in pregnancy: a clinical review.
Hussain, Ahsen; Nduka, Charles; Moth, Philippa; Malhotra, Raman
2017-05-01
Bell's facial nerve palsy (FNP) during pregnancy and the puerperium can present significant challenges for the patient and clinician. Presentation and prognosis can be worse in this group of patients. This article reviews the background, manifestation and management options of FNP. In particular, it focuses on the controversies that exist regarding corticosteroid use during pregnancy and outlines approaches to diagnosis and treatment. Based on this review, we recommend an early evidence-based approach using guidelines derived from non-pregnant populations. This includes assessment for atypical causes, a multidisciplinary input and early introduction of corticosteroids to limit progression and improve prognosis.
NASA Astrophysics Data System (ADS)
Chang, Ya-Ting; Chang, Li-Chiu; Chang, Fi-John
2005-04-01
To bridge the gap between academic research and actual operation, we propose an intelligent control system for reservoir operation. The methodology comprises two major processes: knowledge acquisition and implementation, and the inference system. In this study, a genetic algorithm (GA) and a fuzzy rule base (FRB) are used to extract knowledge from the historical inflow data with a design objective function and from the operating rule curves, respectively. The adaptive network-based fuzzy inference system (ANFIS) is then used to implement the knowledge, to create the fuzzy inference system, and to estimate the optimal reservoir operation. To investigate its applicability and practicability, the Shihmen reservoir, Taiwan, is used as a case study. For the purpose of comparison, a simulation of the currently used M-5 operating rule curve is also performed. The results demonstrate that (1) the GA is an efficient way to search for optimal input-output patterns, (2) the FRB can extract the knowledge embodied in the operating rule curves, and (3) the ANFIS models built on different types of knowledge produce much better performance than the traditional M-5 curves in real-time reservoir operation. Moreover, we show that the model can be made more intelligent for reservoir operation if more information (or knowledge) is involved.
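The GA half of the knowledge-extraction step can be sketched generically. Below is a bare-bones real-coded GA (tournament selection, blend crossover, Gaussian mutation); the objective, bounds, and all parameter values are illustrative stand-ins, not the study's reservoir objective function.

```python
import numpy as np

def ga_minimize(f, bounds, pop=60, gens=200, pc=0.9, pm=0.1, seed=0):
    """Bare-bones real-coded GA: binary-tournament selection, blend
    crossover, Gaussian mutation, hard clipping to the variable bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, X)
        winners = [min(rng.integers(0, pop, 2), key=lambda i: fit[i])
                   for _ in range(pop)]
        P = X[np.array(winners)]
        C = P.copy()
        for i in range(0, pop - 1, 2):          # blend (arithmetic) crossover
            if rng.random() < pc:
                w = rng.random(len(lo))
                C[i], C[i + 1] = (w * P[i] + (1 - w) * P[i + 1],
                                  w * P[i + 1] + (1 - w) * P[i])
        mut = rng.random(C.shape) < pm          # Gaussian mutation
        C[mut] += rng.normal(0.0, 0.1 * (hi - lo).mean(), mut.sum())
        X = np.clip(C, lo, hi)
    fit = np.apply_along_axis(f, 1, X)
    return X[fit.argmin()], fit.min()

# Toy stand-in objective: choose releases r_t that track a demand series.
demand = np.array([3.0, 4.0, 5.0, 4.0])
best, val = ga_minimize(lambda r: ((r - demand) ** 2).sum(),
                        np.array([[0.0, 10.0]] * 4))
print(best.round(2), val)
```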
Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints
Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.
2015-01-01
Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575
Analysis of the glow curve of SrB4O7:Dy compounds employing the GOT model
NASA Astrophysics Data System (ADS)
Ortega, F.; Molina, P.; Santiago, M.; Spano, F.; Lester, M.; Caselli, E.
2006-02-01
The glow curve of SrB4O7:Dy phosphors has been analysed with the general one trap (GOT) model. To solve the differential equation describing the GOT model, a novel algorithm has been employed which significantly reduces the deconvolution time with respect to usual integration algorithms such as the Runge-Kutta method.
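For reference, the GOT balance equation under a linear heating ramp can be integrated directly with an off-the-shelf stiff solver. The abstract does not spell out the paper's faster deconvolution algorithm, so this sketch shows only the conventional baseline it is compared against, with illustrative parameter values.

```python
import numpy as np
from scipy.integrate import solve_ivp

kB = 8.617e-5  # Boltzmann constant in eV/K

def got_glow_curve(E=1.0, s=1e12, N=1e10, n0=1e9, Am=1e-8, An=1e-9,
                   beta=1.0, T_span=(300.0, 600.0)):
    """Integrate dn/dT = -(s/beta)*exp(-E/kT) * n*Am*n / (Am*n + An*(N-n))
    for a linear ramp T = T0 + beta*t and return the glow curve I(T)."""
    def rhs(T, y):
        n = y[0]
        p = s * np.exp(-E / (kB * T))
        return [-(p / beta) * n * (Am * n) / (Am * n + An * (N - n))]
    sol = solve_ivp(rhs, T_span, [n0], method="LSODA", dense_output=True,
                    rtol=1e-8, atol=1.0)
    T = np.linspace(*T_span, 600)
    n = sol.sol(T)[0]
    I = s * np.exp(-E / (kB * T)) * n * (Am * n) / (Am * n + An * (N - n))
    return T, I
```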
46 CFR 97.37-7 - General alarm bells.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Markings for Fire and Emergency Equipment, Etc. § 97.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 196.37-7 - General alarm bells.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Markings for Fire and Emergency Equipment, etc. § 196.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 196.37-7 - General alarm bells.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Markings for Fire and Emergency Equipment, etc. § 196.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 97.37-7 - General alarm bells.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Markings for Fire and Emergency Equipment, Etc. § 97.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 196.37-7 - General alarm bells.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Markings for Fire and Emergency Equipment, etc. § 196.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 97.37-7 - General alarm bells.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Markings for Fire and Emergency Equipment, Etc. § 97.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 97.37-7 - General alarm bells.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Markings for Fire and Emergency Equipment, Etc. § 97.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 196.37-7 - General alarm bells.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Markings for Fire and Emergency Equipment, etc. § 196.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 97.37-7 - General alarm bells.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Markings for Fire and Emergency Equipment, Etc. § 97.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
46 CFR 196.37-7 - General alarm bells.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Markings for Fire and Emergency Equipment, etc. § 196.37-7 General alarm bells. (a) All general alarm bells shall be identified by red lettering at least 1/2 inch high: “GENERAL ALARM—WHEN BELL RINGS GO TO YOUR...
Hebbian based learning with winner-take-all for spiking neural networks
NASA Astrophysics Data System (ADS)
Gupta, Ankur; Long, Lyle
2009-03-01
Learning methods for spiking neural networks are not as well developed as those for traditional neural networks, which widely use back-propagation training. We propose and implement a Hebbian-based learning method with winner-take-all competition for spiking neural networks. This approach is spike-time dependent, which makes it naturally well suited for a network of spiking neurons. Homeostasis is implemented together with Hebbian learning, which ensures stability and quicker learning. Homeostasis implies that the net sum of incoming weights associated with a neuron remains the same. Winner-take-all is also implemented for competitive learning between output neurons. We implemented this learning rule on a biologically based vision processing system that we are developing, using layers of leaky integrate-and-fire neurons. When presented with 4 bars (or Gabor filters) of different orientations, the network learns to recognize the bar orientations (or Gabor filters). After training, each output neuron learns to recognize a bar at a specific orientation and responds by firing more vigorously to that bar and less vigorously to others. These neurons are found to have bell-shaped tuning curves and are similar to the simple cells experimentally observed by Hubel and Wiesel in the striate cortex of cat and monkey.
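A rate-based simplification of the rule (spiking LIF dynamics replaced by a dot product, spike-time dependence replaced by a plain Hebbian increment) still shows the interplay of winner-take-all and homeostatic normalization described above; all names and sizes below are illustrative.

```python
import numpy as np

def train_hebbian_wta(patterns, n_out=4, epochs=50, lr=0.05, seed=0):
    """Hebbian learning with winner-take-all and homeostasis (rate-based
    simplification): only the most active output learns, and its incoming
    weights are rescaled so their net sum stays constant."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_out, patterns.shape[1]))
    W /= W.sum(axis=1, keepdims=True)        # homeostasis: unit weight sum
    for _ in range(epochs):
        for x in patterns[rng.permutation(len(patterns))]:
            winner = np.argmax(W @ x)        # winner-take-all competition
            W[winner] += lr * x              # Hebbian reinforcement
            W[winner] /= W[winner].sum()     # restore the net incoming weight
    return W

# Four oriented bars on a 5x5 retina: horizontal, vertical, two diagonals.
bars = np.zeros((4, 5, 5))
bars[0, 2, :] = 1.0
bars[1, :, 2] = 1.0
bars[2] = np.eye(5)
bars[3] = np.fliplr(np.eye(5))
W = train_hebbian_wta(bars.reshape(4, -1))
print(np.argmax(W @ bars.reshape(4, -1).T, axis=0))  # ideally 4 distinct winners
```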
Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.
Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm^-1] to the curved CSP and MLP path estimates (5 lp cm^-1). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
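The ART update itself is a one-line Kaczmarz projection; the three path estimates (SLP, CSP, MLP) differ only in how the system matrix rows, the chord lengths of each proton's estimated path through the pixels, are built. A minimal sketch, with an arbitrarily chosen relaxation factor:

```python
import numpy as np

def art_reconstruct(A, b, n_iter=10, relax=0.1):
    """Kaczmarz-style ART: sweep the proton histories, projecting the image
    estimate x onto each path equation a_i . x = b_i, where b_i is the
    measured integral (e.g., water-equivalent path length) and a_i holds the
    chord lengths of the estimated path (straight line, spline, or MLP)."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```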
Biswas, Soma; Leitao, Samuel; Theillaud, Quentin; Erickson, Blake W; Fantner, Georg E
2018-06-20
Atomic force microscope (AFM) based single molecule force spectroscopy (SMFS) is a valuable tool in biophysics to investigate ligand-receptor interactions, cell adhesion and cell mechanics. However, force spectroscopy data analysis needs to be done carefully to extract the required quantitative parameters correctly. In particular, the large number of molecules commonly involved in the formation of complex networks leads to very complicated force spectroscopy curves. One therefore generally characterizes the total dissipated energy over a whole pulling cycle, as it is difficult to decompose the complex force curves into individual single-molecule events. However, calculating the energy dissipation directly from the transformed force spectroscopy curves can lead to a significant over-estimation of the dissipated energy during a pulling experiment. The over-estimation arises from the finite stiffness of the cantilever used for AFM-based SMFS. Although this error can be significant (up to the order of 30%), it is generally not compensated for, which can lead to significant misinterpretation of the energy dissipation. In this paper, we show how, in complex SMFS, the excess dissipated energy caused by the stiffness of the cantilever can be identified and corrected using a high-throughput algorithm. This algorithm is then applied to experimental results from molecular networks and cell-adhesion measurements to quantify the improvement in the estimation of the total energy dissipation.
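The correction amounts to computing work against the tip-sample separation s = z - F/k rather than the raw piezo position z, which removes the elastic energy temporarily stored in a cantilever of stiffness k. A minimal sketch (our own function name; it assumes both branches are sampled with increasing separation and consistent sign conventions):

```python
import numpy as np

def dissipated_energy(z_ext, F_ext, z_ret, F_ret, k):
    """Energy dissipated over one extend/retract cycle, evaluated against the
    corrected tip-sample separation s = z - F/k; integrating against z alone
    would add the cantilever's elastic contribution and over-estimate the
    dissipation."""
    s_ext = z_ext - F_ext / k
    s_ret = z_ret - F_ret / k
    w_ext = np.trapz(F_ext, s_ext)   # work done on the extend branch
    w_ret = np.trapz(F_ret, s_ret)   # work recovered on the retract branch
    return w_ext - w_ret             # area enclosed by the hysteresis loop
```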
Bell's Inequality: Revolution in Quantum Physics or Just AN Inadequate Mathematical Model?
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
The main aim of this review is to stress the role of mathematical models in physics. The Bell inequality (BI) is often called the "most famous inequality of the 20th century." It is commonly accepted that its violation in corresponding experiments induced a revolution in quantum physics. Unlike "old quantum mechanics" (of Einstein, Schrödinger, Bohr, Heisenberg, Pauli, Landau, Fock), "modern quantum mechanics" (of Bell, Aspect, Zeilinger, Shimony, Greenberger, Gisin, Mermin) takes seriously so-called quantum non-locality. We will show that the conclusion that one has to give up realism (i.e., the possibility of assigning results of measurements to physical systems) or locality (i.e., to assume action at a distance) is heavily based on one special mathematical model, invented by A. N. Kolmogorov in 1933. One should pay serious attention to the role of mathematical models in physics. The problems of realism and locality induced by Bell's argument can be solved by using non-Kolmogorovian probabilistic models. We compare this situation with non-Euclidean geometric models in relativity theory.
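The quantitative content of the argument is easy to reproduce. For the singlet state the correlation is E(x, y) = -cos(x - y); any Kolmogorovian (local hidden-variable) model bounds the CHSH combination by 2, while the quantum prediction reaches 2*sqrt(2):

```python
import numpy as np

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b') for
    singlet-state correlations E(x, y) = -cos(x - y)."""
    E = lambda x, y: -np.cos(x - y)
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

# Local models require |S| <= 2; these angles give |S| = 2*sqrt(2) ~ 2.828.
print(chsh(0.0, np.pi / 2, np.pi / 4, -np.pi / 4))
```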
Quantum Locality, Rings a Bell?: Bell's Inequality Meets Local Reality and True Determinism
NASA Astrophysics Data System (ADS)
Sánchez-Kuntz, Natalia; Nahmad-Achar, Eduardo
2018-01-01
By assuming a deterministic evolution of quantum systems and taking realism into account, we carefully build a hidden variable theory for Quantum Mechanics (QM) based on the notion of ontological states proposed by 't Hooft (The cellular automaton interpretation of quantum mechanics, arXiv:1405.1548v3, 2015; Springer Open 185, https://doi.org/10.1007/978-3-319-41285-6, 2016). We view these ontological states as the ones embedded with realism and compare them to the (usual) quantum states that represent superpositions, viewing the latter as mere information of the system they describe. Such a deterministic model puts forward conditions for the applicability of Bell's inequality: the usual inequality cannot be applied to the usual experiments. We build a Bell-like inequality that can be applied to the EPR scenario and show that this inequality is always satisfied by QM. In this way we show that QM can indeed have a local interpretation, and thus meet with the causal structure imposed by the Theory of Special Relativity in a satisfying way.
Communication channels secured from eavesdropping via transmission of photonic Bell states
NASA Astrophysics Data System (ADS)
Shimizu, Kaoru; Imoto, Nobuyuki
1999-07-01
This paper proposes a quantum communication scheme for sending a definite binary sequence while confirming the security of the transmission. The scheme is very suitable for sending a ciphertext in a secret-key cryptosystem so that we can detect any eavesdropper who attempts to decipher the key. Thus we can continue to use a secret key unless we detect eavesdropping and the security of a key that is used repeatedly can be enhanced to the level of one-time-pad cryptography. In our scheme, a pair of entangled photon twins is employed as a bit carrier which is encoded in a two-term superposition of four Bell states. Different bases are employed for encoding the binary sequence of a ciphertext and a random test bit. The photon twins are measured with a Bell state analyzer and any bit can be decoded from the resultant Bell state when the receiver is later notified of the coding basis through a classical channel. By opening the positions and the values of test bits, ciphertext can be read and eavesdropping is simultaneously detected.
From Bell Labs to Silicon Valley: A Saga of Technology Transfer, 1954-1961
NASA Astrophysics Data System (ADS)
Riordan, Michael
2009-03-01
Although Bell Telephone Laboratories invented the transistor and developed most of the associated semiconductor technology, the integrated circuit or microchip emerged elsewhere--at Texas Instruments and Fairchild Semiconductor Company. I recount how the silicon technology required to make microchips possible was first developed at Bell Labs in the mid-1950s. Much of it reached the San Francisco Bay Area when transistor pioneer William Shockley left Bell Labs in 1955 to establish the Shockley Semiconductor Laboratory in Mountain View, hiring a team of engineers and scientists to develop and manufacture transistors and related semiconductor devices. But eight of them--including Gordon Moore and Robert Noyce, eventually the co-founders of Intel--resigned en masse in September 1957 to start Fairchild, bringing with them the scientific and technological expertise they had acquired and further developed at Shockley's firm. This event marked the birth of Silicon Valley, both technologically and culturally. By March 1961 the company was marketing its Micrologic integrated circuits, the first commercial silicon microchips, based on the planar processing technique developed at Fairchild by Jean Hoerni.
Scene-based nonuniformity correction with video sequences and registration.
Hardie, R C; Hayat, M M; Armstrong, E; Yasuda, B
2000-03-10
We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
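Once registration supplies a per-frame estimate of the true scene at each pixel, the per-detector response fit is ordinary least squares. A minimal sketch assuming the common linear detector model y = g*x + o (the paper's curve-fitting step; variable names ours):

```python
import numpy as np

def fit_gain_offset(observed, true_est):
    """Per-pixel least-squares fit of the linear response y = g*x + o, given
    a stack of observed frames and the registration-derived true-scene
    estimates; the corrected frame is then (y - o) / g."""
    T, H, W = observed.shape
    g = np.empty((H, W))
    o = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            g[i, j], o[i, j] = np.polyfit(true_est[:, i, j],
                                          observed[:, i, j], 1)
    return g, o
```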
Optimality of semiquantum nonlocality in the presence of high inconclusive rates
Lim, Charles Ci Wen
2016-02-01
Quantum nonlocality is a counterintuitive phenomenon that lies beyond the purview of causal influences. Recently, Bell inequalities have been generalized to the case of quantum inputs, leading to a powerful family of semiquantum Bell inequalities that are capable of detecting any entangled state. We focus on a different problem and investigate how the local indistinguishability of quantum inputs and postselection may affect the requirements to detect semiquantum nonlocality. Moreover, we consider a semiquantum nonlocal game based on locally indistinguishable qubit inputs, and derive its postselected local and quantum bounds by using a connection to the local distinguishability of quantum states. Interestingly, we find that the postselected local bound is independent of the measurement efficiency, and the achievable postselected Bell violation increases with decreasing measurement efficiency.
Belle-II VXD radiation monitoring and beam abort with sCVD diamond sensors
NASA Astrophysics Data System (ADS)
Adamczyk, K.; Aihara, H.; Angelini, C.; Aziz, T.; Babu, V.; Bacher, S.; Bahinipati, S.; Barberio, E.; Baroncelli, T.; Basith, A. K.; Batignani, G.; Bauer, A.; Behera, P. K.; Bergauer, T.; Bettarini, S.; Bhuyan, B.; Bilka, T.; Bosi, F.; Bosisio, L.; Bozek, A.; Buchsteiner, F.; Casarosa, G.; Ceccanti, M.; Červenkov, D.; Chendvankar, S. R.; Dash, N.; Divekar, S. T.; Doležal, Z.; Dutta, D.; Forti, F.; Friedl, M.; Hara, K.; Higuchi, T.; Horiguchi, T.; Irmler, C.; Ishikawa, A.; Jeon, H. B.; Joo, C.; Kandra, J.; Kang, K. H.; Kato, E.; Kawasaki, T.; Kodyš, P.; Kohriki, T.; Koike, S.; Kolwalkar, M. M.; Kvasnička, P.; Lanceri, L.; Lettenbicher, J.; Mammini, P.; Mayekar, S. N.; Mohanty, G. B.; Mohanty, S.; Morii, T.; Nakamura, K. R.; Natkaniec, Z.; Negishi, K.; Nisar, N. K.; Onuki, Y.; Ostrowicz, W.; Paladino, A.; Paoloni, E.; Park, H.; Pilo, F.; Profeti, A.; Rashevskaya, I.; Rao, K. K.; Rizzo, G.; Rozanska, M.; Sandilya, S.; Sasaki, J.; Sato, N.; Schultschik, S.; Schwanda, C.; Seino, Y.; Shimizu, N.; Stypula, J.; Tanaka, S.; Tanida, K.; Taylor, G. N.; Thalmeier, R.; Thomas, R.; Tsuboyama, T.; Uozumi, S.; Urquijo, P.; Vitale, Lorenzo; Volpi, M.; Watanuki, S.; Watson, I. J.; Webb, J.; Wiechczynski, J.; Williams, S.; Würkner, B.; Yamamoto, H.; Yin, H.; Yoshinobu, T.
2016-07-01
The Belle-II VerteX Detector (VXD) has been designed to improve performance with respect to Belle and to cope with an unprecedented luminosity of 8 × 10^35 cm^-2 s^-1 achievable by SuperKEKB. Special care is needed to monitor both the radiation dose accumulated throughout the life of the experiment and the instantaneous radiation rate, in order to be able to promptly react to sudden spikes for the purpose of protecting the detectors. A radiation monitoring and beam abort system based on single-crystal diamond sensors is now under active development for the VXD. The sensors will be placed in several key positions in the vicinity of the interaction region. The severe space limitations require a challenging remote readout of the sensors.
NASA Astrophysics Data System (ADS)
Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme
2016-04-01
Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward, given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. In fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve takes the form Q = a(Z - Z0)^b * sqrt(S), with Z the water surface elevation and S its slope gained from satellite altimetry, a and b the power-law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated, with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 under various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run to generate synthetic satellite observations, and rating curve parameters are then determined for each river section with an MCMC algorithm. The twin experiments show that a rating curve formulation including water surface slope, i.e. closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, and for different river slopes and cross-section shapes. The river bed elevation Z0 is systematically well identified, with relative errors on the order of a few percent. Eventually, these altimetry-based rating curves provide morphological parameters of river reaches that can be used as inputs to hydraulic models and a priori information that could be useful for SWOT inversion algorithms.
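A minimal Metropolis sampler for the three rating-curve parameters can look as follows; the step sizes, priors, and error model are illustrative stand-ins for the study's MCMC setup.

```python
import numpy as np

def fit_rating_curve(Z, S, Q, n_iter=20000, seed=0):
    """Random-walk Metropolis for Q = a*(Z - Z0)^b * sqrt(S): sample
    (log a, b, Z0) under a Gaussian likelihood on log Q."""
    rng = np.random.default_rng(seed)
    def loglik(p):
        la, b, Z0 = p
        if Z0 >= Z.min() or not 0.5 < b < 5.0:  # keep Z - Z0 > 0, b plausible
            return -np.inf
        model = la + b * np.log(Z - Z0) + 0.5 * np.log(S)
        return -0.5 * np.sum((np.log(Q) - model) ** 2) / 0.1 ** 2
    p = np.array([0.0, 1.6, Z.min() - 1.0])     # crude starting point
    lp, chain = loglik(p), []
    for _ in range(n_iter):
        q = p + rng.normal(0.0, [0.05, 0.05, 0.1])
        lq = loglik(q)
        if np.log(rng.random()) < lq - lp:      # Metropolis accept/reject
            p, lp = q, lq
        chain.append(p.copy())
    return np.array(chain)

# The posterior median of chain[n_iter//2:, 2] estimates the bed elevation Z0.
```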
Flexible margin kinematics and vortex formation of Aurelia aurita and Robojelly.
Villanueva, Alex; Vlachos, Pavlos; Priya, Shashank
2014-01-01
The development of a rowing jellyfish biomimetic robot termed "Robojelly" has led to the discovery of a passive flexible flap located between the flexion point and bell margin on Aurelia aurita. A comparative analysis of biomimetic robots showed that the presence of a passive flexible flap results in a significant increase in swimming performance. In this work we further investigate this concept by developing varying flap geometries and comparing their kinematics with A. aurita. It was shown that the animal flap kinematics can be replicated with high fidelity using a passive structure, and that a flap with curved and tapered geometry gave the most biomimetic performance. A method for identifying the flap location was established by utilizing the bell curvature and the variation of curvature as a function of time. Flaps of constant cross-section and varying lengths were incorporated on the Robojelly to conduct a systematic study of the starting vortex circulation. Circulation was quantified using velocity field measurements obtained from planar Time Resolved Digital Particle Image Velocimetry (TRDPIV). The starting vortex circulation was scaled using a varying-orifice model and a pitching-panel model. The varying-orifice model, traditionally considered the better representation of jellyfish propulsion, did not appear to capture the scaling of the starting vortex. In contrast, the pitching-panel representation appeared to better scale the governing flow physics and revealed a strong dependence on the flap kinematics and geometry. The results suggest that an alternative description should be considered for rowing jellyfish propulsion, using a pitching-panel method instead of the traditional varying-orifice model. Finally, the results show the importance of incorporating the entire bell geometry as a function of time in modeling rowing jellyfish propulsion.
An assessment of the BEST procedure to estimate the soil water retention curve
NASA Astrophysics Data System (ADS)
Castellini, Mirko; Di Prima, Simone; Iovino, Massimo
2017-04-01
The Beerkan Estimation of Soil Transfer parameters (BEST) procedure represents a very attractive method to accurately and quickly obtain a complete hydraulic characterization of the soil (Lassabatère et al., 2006). However, further investigations are needed to check the prediction reliability of the soil water retention curve (Castellini et al., 2016). Four soils with different physical properties (texture, bulk density, porosity and stoniness) were considered in this investigation. Measurement sites were located at Palermo University (PAL site) and Villabate (VIL site) in Sicily, Arborea (ARB site) in Sardinia, and Foggia (FOG site) in Apulia. For a given site, the BEST procedure was applied and the water retention curve was estimated using the available BEST algorithms (i.e., slope, intercept and steady), with the reference values of the infiltration constants (β = 0.6 and γ = 0.75). The water retention curves estimated by BEST were then compared with those obtained in the laboratory by the evaporation method (Wind, 1968). About ten experiments were carried out with each method. A sensitivity analysis of the constants β and γ within their feasible ranges of variability (0.1 < β < 1.9 and 0.61 < γ < 0.79) was also carried out for each soil in order to establish: i) the impact of the infiltration constants in the three BEST algorithms on saturated hydraulic conductivity, Ks, soil sorptivity, S, and the retention curve scale parameter, hg; ii) the effectiveness of the three BEST algorithms in estimating the soil water retention curve. The main results of the sensitivity analysis showed that S tended to increase for increasing β values and decreasing γ values for all BEST algorithms and soils. On the other hand, Ks tended to decrease for increasing β and γ values. Our results also reveal that: i) BEST-intercept and BEST-steady yield lower S and higher Ks values than BEST-slope; ii) these algorithms also yield more variable values. For the latter, a higher sensitivity of these two alternative algorithms to β than to γ was established. The reduced sensitivity to γ may imply a lack of correction of the simplified theoretical description of the parabolic two-dimensional and one-dimensional wetting front along the soil profile (Smettem et al., 1994), which likely resulted in lower S and higher Ks values. Nevertheless, these differences are expected to be negligible for practical applications (Di Prima et al., 2016). On the other hand, the intercept and steady algorithms yielded hg values independent of γ; hence, determining water retention curves by these algorithms appears questionable. The linear regression between the soil water retention curves of BEST-slope and BEST-intercept (the same result is obtained with BEST-steady, for a purely analytical reason) vs. the
laboratory method showed the following main results: i) the BEST procedure generally tends to underestimate the soil water retention (the exception was the PAL site); depending on the soil and algorithm, the root mean square differences (RMSD) between BEST and the laboratory method ranged between 0.028 cm3/cm3 (VIL, BEST-slope) and 0.082 cm3/cm3 (FOG, BEST-intercept/steady); the highest RMSD values (0.124-0.140 cm3/cm3) were obtained at the PAL site; ii) depending on the soil, BEST-slope generally gave the lowest RMSD values (by a factor of 1.2-2.1); iii) when the whole variability range of β and γ was considered and a different pair of parameters was chosen (in general, extreme values of the parameters), lower RMSD values were detected in three out of four cases for BEST-slope; iv) the negligible observed differences in RMSD nevertheless suggest that using the reference values of the infiltration constants does not significantly worsen the soil water retention curve estimation; v) in 25% of the considered soils (the PAL site), the BEST procedure was not able to reproduce the soil's retention curve with sufficient accuracy. In conclusion, our results showed that the BEST-slope algorithm yields more accurate estimates of water retention data for three of the four sampled soils. Conversely, determining water retention curves by the intercept and steady algorithms may be questionable, since these algorithms overestimated hg, yielding values of this parameter independent of the proportionality coefficient γ. (*) The work was supported by the project "STRATEGA, Sperimentazione e TRAsferimento di TEcniche innovative di aGricoltura conservativA", financed by Regione Puglia - Servizio Agricoltura. References: Castellini, M., Iovino, M., Pirastru, M., Niedda, M., Bagarello, V., 2016. Use of BEST Procedure to Assess Soil Physical Quality in the Baratz Lake Catchment (Sardinia, Italy). Soil Sci. Soc. Am. J. 80:742-755. doi:10.2136/sssaj2015.11.0389. Di Prima, S., Lassabatere, L., Bagarello, V., Iovino, M., Angulo-Jaramillo, R., 2016. Testing a new automated single ring infiltrometer for Beerkan infiltration experiments. Geoderma 262, 20-34. doi:10.1016/j.geoderma.2015.08.006. Lassabatère, L., Angulo-Jaramillo, R., Soria Ugalde, J.M., Cuenca, R., Braud, I., Haverkamp, R., 2006. Beerkan Estimation of Soil Transfer Parameters through Infiltration Experiments - BEST. Soil Sci. Soc. Am. J. 70:521-532. doi:10.2136/sssaj2005.0026. Smettem, K.R.J., Parlange, J.Y., Ross, P.J., Haverkamp, R., 1994. Three-dimensional analysis of infiltration from the disc infiltrometer: 1. A capillary-based theory. Water Resour. Res. 30, 2925-2929. doi:10.1029/94WR01787. Wind, G.P., 1968. Capillary conductivity data estimated by a simple method. In: Water in the Unsaturated Zone, Proceedings of Wageningen Symposium, June 1966, Vol. 1 (eds P.E. Rijtema & H. Wassink), pp. 181-191, IASAH, Gentbrugge, Belgium.
Application of self-balanced loading test to socketed pile in weak rock
NASA Astrophysics Data System (ADS)
Cheng, Ye; Gong, Weiming; Dai, Guoliang; Wu, JingKun
2008-11-01
The self-balanced loading test differs from traditional pile testing methods. The key piece of equipment is a specially designed cell, embedded in the pile body, that is used to apply the load. During the test, the displacements of the top and bottom plates of the cell are recorded at every load level, so Q-S curves can be obtained and the bearing capacity of the pile can be judged from the test results. The equipment is simple and the cost is low, and under some special conditions the method offers great advantages. In Guangxi Province, tertiary mudstone, a typical weak rock, is widely distributed and is usually chosen as the bearing stratum for pile foundations. In order to make full use of its high bearing capacity, piles are generally designed as belled piles. The foundations of two high-rise buildings close to each other are made up of belled piles socketed in weak rock. To obtain the bearing capacity of a belled socketed pile in weak rock, in-situ loading tests should be performed, since it is not reasonable to base the design on the experimental compression strength of the mudstone. The self-balanced loading test was applied to eight piles of the two buildings. To get the best test results, the cell assembly should take different forms depending on the depth to which the pile is socketed in rock and the dimensions of the enlarged toe. Three assembly modes were used, and the tests were carried out successfully. The self-balanced loading tests yielded the large bearing capacities of the belled socketed piles, and several key parameters required for design were obtained. Analysis of the test data revealed the bearing performance of the pile tip, the pile side and the whole pile. It was further observed that the bearing capacity of a belled pile socketed in the mudstone decreases after the mudstone has been immersed. The mudstone is rich in montmorillonite, and the content of fine (colloidal) particles in its size composition is high. Because the specific surface area of these fine particles is large and their water intake capacity is strong, water content has a great effect on the strength of the mudstone: its strength apparently declines nonlinearly with increasing water content. Effective measures were therefore taken to prevent the mudstone from being immersed during construction, and valuable experience has been accumulated for similar projects in the future.
Pentium Pro inside. 1; A treecode at 430 Gigaflops on ASCI Red
NASA Technical Reports Server (NTRS)
Warren, M. S.; Becker, D. J.; Sterling, T.; Salmon, J. K.; Goda, M. P.
1997-01-01
As an entry for the 1997 Gordon Bell performance prize, we present results from two methods of solving the gravitational N-body problem on the Intel Teraflops system at Sandia National Laboratory (ASCI Red). The first method, an O(N^2) algorithm, obtained 635 Gigaflops for a 1 million particle problem on 6800 Pentium Pro processors. The second solution method, a tree-code which scales as O(N log N), sustained 170 Gigaflops over a continuous 9.4 hour period on 4096 processors, integrating the motion of 322 million mutually interacting particles in a cosmology simulation, while saving over 100 Gigabytes of raw data. Additionally, the tree-code sustained 430 Gigaflops on 6800 processors for the first 5 time-steps of that simulation. This tree-code solution is approximately 10^5 times more efficient than the O(N^2) algorithm for this problem. As an entry for the 1997 Gordon Bell price/performance prize, we present two calculations from the disciplines of astrophysics and fluid dynamics. The simulations were performed on two 16 Pentium Pro processor Beowulf-class computers (Loki and Hyglac) constructed entirely from commodity personal computer technology, at a cost of roughly $50k each in September, 1996. The price of an equivalent system in August 1997 is less than $30k. At Los Alamos, Loki performed a gravitational tree-code N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 879 Mflops over a ten day period, and produced roughly 10 Gbytes of raw data.
Effects of shape parameters on the attractiveness of a female body.
Fan, J; Dai, W; Qian, X; Chau, K P; Liu, Q
2007-08-01
Various researchers have suggested that certain anthropometric ratios can be used to measure female body attractiveness, including the waist-to-hip ratio, Body Mass Index (BMI), and the body volume divided by the square of the height (Volume-Height Index). Based on a wide range of female subjects and virtual images of bodies with different ratios, the Volume-Height Index was found to provide the best fit to rated female body attractiveness, and its effect can be fitted with two half bell-shaped exponential curves with an optimal Volume-Height Index at 14.2 liter/m^2. It is suggested that the general trend of the effect of the Volume-Height Index may be culturally invariant, but its optimal value may vary from culture to culture. In addition to the Volume-Height Index, other body parameters or ratios reflecting body proportions and the traits of feminine characteristics had smaller but significant effects on female body attractiveness, and such effects were stronger at the optimum Volume-Height Index.
SeaWiFS Technical Report Series. Volume 29; The SeaWiFS CZCS-Type Pigment Algorithm
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Aiken, James; Moore, Gerald F.; Trees, Charles C.; Clark, Dennis K.
1995-01-01
The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) mission will provide operational ocean color data that will be superior to those of the previous Coastal Zone Color Scanner (CZCS) proof-of-concept mission. An algorithm is needed that exploits the full functionality of SeaWiFS whilst remaining compatible in concept with the algorithms used for the CZCS. This document describes the theoretical rationale of radiance band-ratio methods for determining chlorophyll-a and other important biogeochemical parameters, and their implementation for the SeaWiFS mission. Pigment interrelationships are examined to explain the success of the CZCS algorithms. Since chlorophyll-a absorbs only weakly at 520 nm, the success of the 520 nm to 550 nm CZCS band ratio needs to be explained. This is done by showing that, in pigment data from a range of oceanic provinces, chlorophyll-a (absorbing at less than 490 nm), carotenoids (absorbing at greater than 460 nm), and total pigment are highly correlated. Correlations within pigment groups, particularly photoprotectant and photosynthetic carotenoids, are less robust. The sources of variability in optical data are examined using the NIMBUS Experiment Team (NET) bio-optical data set and bio-optical model. In both the model and NET data, the majority of the variance in the optical data is attributed to variability in pigment (chlorophyll-a) and total particulates, with less than 5% of the variability resulting from pigment assemblage. The relationship between band ratios and chlorophyll is examined analytically, and a new formulation based on a dual hyperbolic model is suggested, which gives a better calibration curve than the conventional log-log linear regression fit. The new calibration curve shows that the 490:555 ratio is the best single-band ratio and is the recommended CZCS-type pigment algorithm. Using both the model and NET data, a number of multiband algorithms are developed, the best of which is an algorithm based on the 443:555 and 490:555 ratios. From model data, the forms of potential algorithms for other products, such as total particulates and dissolved organic matter (DOM), are suggested.
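For orientation, the conventional calibration that the dual hyperbolic model improves upon is a log-log linear regression of chlorophyll on the band ratio; the abstract does not give the hyperbolic form explicitly, so only this baseline is sketched here.

```python
import numpy as np

def fit_loglog(chl, ratio):
    """Conventional CZCS-type calibration: log10(chl) = c0 + c1*log10(R),
    fitted to coincident in-water chlorophyll and a band ratio R such as
    490:555."""
    c1, c0 = np.polyfit(np.log10(ratio), np.log10(chl), 1)
    return c0, c1

def chl_from_ratio(R, c0, c1):
    """Invert the calibration to estimate chlorophyll from a band ratio."""
    return 10.0 ** (c0 + c1 * np.log10(R))
```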
Dynamic cone beam CT angiography of carotid and cerebral arteries using canine model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai Weixing; Zhao Binghui; Conover, David
2012-01-15
Purpose: This research is designed to develop and evaluate a flat-panel detector-based dynamic cone beam CT system for dynamic angiography imaging, which is able to provide both dynamic functional information and dynamic anatomic information from one multirevolution cone beam CT scan. Methods: A dynamic cone beam CT scan acquired projections over four revolutions within a time window of 40 s after contrast agent injection through a femoral vein to cover the entire wash-in and wash-out phases. A dynamic cone beam CT reconstruction algorithm was utilized and a novel recovery method was developed to correct the time-enhancement curve of contrast flow. From the same data set, both projection-based subtraction and reconstruction-based subtraction approaches were utilized and compared to remove the background tissues and visualize the 3D vascular structure to provide the dynamic anatomic information. Results: Through computer simulations, the new recovery algorithm for dynamic time-enhancement curves was optimized and showed excellent accuracy in recovering the actual contrast flow. Canine model experiments also indicated that the recovered time-enhancement curves from dynamic cone beam CT imaging agreed well with those of an IV-digital subtraction angiography (DSA) study. The dynamic vascular structures reconstructed using projection-based subtraction and reconstruction-based subtraction were almost identical, as the differences between them were comparable to the background noise level. At the enhancement peak, all the major carotid and cerebral arteries and the Circle of Willis could be clearly observed. Conclusions: The proposed dynamic cone beam CT approach can accurately recover the actual contrast flow, and dynamic anatomic imaging can be obtained with high isotropic 3D resolution. This approach is promising for diagnosis and treatment planning of vascular diseases and strokes.
Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach
Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy
2014-01-01
We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901
Incremental triangulation by way of edge swapping and local optimization
NASA Technical Reports Server (NTRS)
Wiltberger, N. Lyn
1994-01-01
This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge-swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criterion. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution-adaptive grid generation), the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user either to specify the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest value of the refinement measure. The code allows the user to: triangulate a cloud of prespecified points (sites); triangulate a set of prespecified interior points constrained by prespecified boundary curve(s); Steiner triangulate the interior/exterior of prespecified boundary curve(s); refine existing triangulations based on solution error measures; and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
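The MaxMin (Delaunay) swap decision reduces to the classical incircle predicate, sketched below; the MinMax option instead compares the largest interior angle produced by the two possible diagonals of the quadrilateral and keeps the smaller one.

```python
import numpy as np

def incircle(a, b, c, d):
    """Delaunay (MaxMin) edge-swap test: returns a positive value when point
    d lies inside the circumcircle of triangle (a, b, c), taken in
    counterclockwise order, in which case the shared edge should be
    flipped."""
    rows = []
    for p in (a, b, c):
        dx, dy = p[0] - d[0], p[1] - d[1]
        rows.append([dx, dy, dx * dx + dy * dy])
    return np.linalg.det(np.array(rows))

# Square corners: the fourth corner lies exactly on the circumcircle (~0).
print(incircle((0, 0), (1, 0), (1, 1), (0, 1)))
```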
Association between recovery from Bell's palsy and body mass index.
Choi, S A; Shim, H S; Jung, J Y; Kim, H J; Kim, S H; Byun, J Y; Park, M S; Yeo, S G
2017-06-01
Although many factors have been found to be involved in recovery from Bell's palsy, no study has investigated the association between recovery from Bell's palsy and obesity. This study therefore evaluated the association between recovery from Bell's palsy and body mass index (BMI). Subjects were classified into five groups based on BMI (kg/m^2). Demographic and clinical characteristics were compared among these groups. Assessed factors included sex, age, time from paralysis to visiting a hospital, the presence of comorbidities such as diabetes mellitus and hypertension, degree of initial facial nerve paralysis by House-Brackmann (H-B) grade and neurophysiological testing, and final recovery rate. Based on BMI, 37 subjects were classified as underweight, 169 as normal weight, 140 as overweight, 155 as obese and 42 as severely obese. Classification of the degree of initial facial nerve paralysis as moderate or severe, according to H-B grade and electroneurography, showed no difference in severity of initial facial paralysis among the five groups (P > 0.05). However, the final recovery rate was significantly higher in the normal weight than in the underweight or obese group (P < 0.05). Obesity or underweight had no effect on the severity of initial facial paralysis, but the final recovery rate was lower in the obese and underweight groups than in the normal group. © 2016 John Wiley & Sons Ltd.
Genealogy of John and Charles Bell: their relationship with the children of Charles Shaw of Ayr.
Kaufman, M
2005-11-01
The Reverend William Bell had six children who survived infancy. Two of his sons entered the legal profession and two other sons became distinguished anatomists and surgeons--John Bell, said for 20 years to have been the leading operating surgeon in Britain and throughout the world--and Sir Charles Bell, possibly the most distinguished anatomist and physiologist of his day. Little is known about the fifth son or their sister. Charles Shaw, a lawyer of Ayr, had four sons and two daughters who survived infancy. Two of his sons, John and Alexander, became anatomists and later surgeons at the Middlesex Hospital, and both worked closely with Charles Bell at the Great Windmill Street School of Anatomy. His third son entered the law and his fourth son became a distinguished soldier. The two daughters of Charles Shaw married into the Bell family: Barbara married George Joseph Bell and Marion married Mr (later Sir) Charles Bell.
Algorithms that Defy the Gravity of Learning Curve
2017-04-28
... three nearest neighbour-based anomaly detectors, i.e., an ensemble of nearest neighbours, a recent nearest neighbour-based ensemble method called iNNE ... streams. Note that the change in sample size does not alter the geometrical data characteristics discussed here.
NASA Technical Reports Server (NTRS)
Croom, Delwin R; Huffman, Jarrett K
1957-01-01
Results of an investigation at low speeds to determine the gust-alleviation capabilities (reduction in lift-curve slope) of spoilers and deflectors on a 35 degree swept-wing model of high aspect ratio and on a 1/4-scale model of the X-5 airplane with 35 degree swept wings indicate that deflector and spoiler-deflector types of controls can be designed to provide considerable gust alleviation for a swept-wing airplane while still maintaining stability and control.
On the stability analysis of sharply stratified shear flows
NASA Astrophysics Data System (ADS)
Churilov, Semyon
2018-05-01
When the stability of a sharply stratified shear flow is studied, the density profile is usually taken to be stepwise and the weak stratification between pycnoclines is neglected. As a consequence, two-sided neutral curves appear in the instability domain of the flow, such that the waves corresponding to them are neutrally stable while the neighboring waves on either side of the curve are unstable, in contrast with the classical result of Miles (J Fluid Mech 16:209-227, 1963), who proved that in stratified flows unstable oscillations can lie only on one side of the neutral curve. In this paper the contradiction is resolved, and the changes in the flow stability pattern under the transition from a model stepwise density profile to a continuous one are analyzed. On this basis, a simple self-consistent algorithm is proposed for studying the stability of sharply stratified shear flows with a continuous density variation and an arbitrary monotonic velocity profile without inflection points. Because our calculations and the algorithm are both based on a method of stability analysis (Churilov, J Fluid Mech 539:25-55, 2005; ibid. 617:301-326, 2008) which differs essentially from those usually used, the paper starts with a brief review of the method and the results obtained with it.
Clinical practice guideline: Bell's Palsy executive summary.
Baugh, Reginald F; Basura, Gregory J; Ishii, Lisa E; Schwartz, Seth R; Drumheller, Caitlin Murray; Burkholder, Rebecca; Deckard, Nathan A; Dawson, Cindy; Driscoll, Colin; Gillespie, M Boyd; Gurgel, Richard K; Halperin, John; Khalid, Ayesha N; Kumar, Kaparaboyna Ashok; Micco, Alan; Munsell, Debra; Rosenbaum, Steven; Vaughan, William
2013-11-01
The American Academy of Otolaryngology-Head and Neck Surgery Foundation (AAO-HNSF) has published a supplement to this issue featuring the new Clinical Practice Guideline: Bell's Palsy. To assist in implementing the guideline recommendations, this article summarizes the rationale, purpose, and key action statements. The 11 recommendations developed encourage accurate and efficient diagnosis and treatment and, when applicable, facilitate patient follow-up to address the management of long-term sequelae or evaluation of new or worsening symptoms not indicative of Bell's palsy. There are myriad treatment options for Bell's palsy; some controversy exists regarding the effectiveness of several of these options, and there are consequent variations in care. In addition, there are numerous diagnostic tests available that are used in the evaluation of patients with Bell's palsy. Many of these tests are of questionable benefit in Bell's palsy. Furthermore, while patients with Bell's palsy enter the health care system with facial paresis/paralysis as a primary complaint, not all patients with facial paresis/paralysis have Bell's palsy. It is a concern that patients with alternative underlying etiologies may be misdiagnosed or have an unnecessary delay in diagnosis. All of these quality concerns provide an important opportunity for improvement in the diagnosis and management of patients with Bell's palsy.
Computational Fluid Dynamics Simulation of Dual Bell Nozzle Film Cooling
NASA Technical Reports Server (NTRS)
Braman, Kalen; Garcia, Christian; Ruf, Joseph; Bui, Trong
2015-01-01
Marshall Space Flight Center (MSFC) and Armstrong Flight Research Center (AFRC) are working together to advance the technology readiness level (TRL) of the dual bell nozzle concept. Dual bell nozzles are a form of altitude-compensating nozzle consisting of two connected bell contours. At low altitude the flow remains attached within the first, lower-area-ratio bell, separating from the wall at the inflection point that joins the two contours. This relatively low expansion results in higher nozzle efficiency during the low-altitude portion of the launch. As ambient pressure decreases with increasing altitude, the nozzle flow expands to fill the larger-area-ratio second bell, which enables higher specific impulse (Isp) during the high-altitude and vacuum portions of the launch. Despite a long history of theoretical consideration and promise for improving rocket performance, dual bell nozzles have yet to be developed for practical use and have seen only limited testing. One barrier to their use is the lack of control over the transition of the nozzle flow from the first bell to the second during operation. A method this team is pursuing to enhance the controllability of that transition is manipulation of the film coolant injected near the inflection between the two bell contours. Computational fluid dynamics (CFD) analysis is being run to assess the degree of control over nozzle flow transition that can be generated by manipulating the film injection. A cold-flow dual bell nozzle, without film coolant, was tested over a range of simulated altitudes in 2004 in MSFC's nozzle test facility. Both NASA centers have performed a series of simulations of that dual bell to validate their computational models, and those CFD results are compared to the experimental results within this paper. MSFC then added film injection to the CFD grid of the dual bell nozzle. A series of nozzle pressure ratios and film coolant flow rates is investigated to determine the effect of the film injection on the nozzle flow transition behavior. The results of this CFD study of a dual bell with film injection are presented in this paper.
NASA Astrophysics Data System (ADS)
Ling, Jun
Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.
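The fast implementation claimed above rests on a standard fact: a circulant matrix is diagonalized by the discrete Fourier transform, so matrix-vector products cost O(n log n) via FFTs. A minimal Python sketch of that identity (illustrative only, not the authors' RELAX-BLAST code):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by x.

    Circulant matrices are diagonalized by the DFT, so the product
    reduces to elementwise multiplication in the Fourier domain.
    """
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))

# Check against the explicit dense product C[i, j] = c[(i - j) mod n].
rng = np.random.default_rng(0)
c = rng.standard_normal(8)
x = rng.standard_normal(8)
dense = np.array([[c[(i - j) % 8] for j in range(8)] for i in range(8)])
assert np.allclose(circulant_matvec(c, x).real, dense @ x)
```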
Han, Fang; Wang, Zhijie; Fan, Hong
2017-01-01
This paper proposes a new method to determine the neuronal tuning curves for maximum information efficiency by computing the optimum firing-rate distribution. First, we propose a general definition of information efficiency that relates mutual information to neuronal energy consumption. The energy consumption is composed of two parts: the neuronal basic energy consumption and the energy consumption of spike emission. A parameter modeling the relative importance of energy consumption is introduced in the definition of the information efficiency. Then, we design a combination of exponential functions to describe the optimum firing-rate distribution, based on an analysis of how the mutual information and the energy consumption depend on the shape of the firing-rate distribution. Furthermore, we develop a rapid algorithm to search the parameter values of the optimum firing-rate distribution function. Finally, we find with the rapid algorithm that a combination of two different exponential functions with two free parameters can describe the optimum firing-rate distribution accurately. We also find that if the energy consumption is relatively unimportant (important) compared to the mutual information, or the neuronal basic energy consumption is relatively large (small), the curve of the optimum firing-rate distribution will be relatively flat (steep), and the corresponding optimum tuning curve takes a sigmoid form if the stimulus distribution is normal. PMID:28270760
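The abstract does not give the exact functional forms, so the following sketch makes loud assumptions: the mutual information is replaced by the entropy of the firing-rate distribution (its noiseless-channel upper bound), the energy is modeled as a basic term plus a cost proportional to the mean rate, and the distribution is a two-exponential mixture as described above. All constants and parameter ranges are hypothetical.

```python
import numpy as np

# Hypothetical stand-ins for the paper's quantities: score a firing-rate
# distribution p(r) by entropy / energy, with energy = basic + k * E[r].
r = np.linspace(0.0, 100.0, 1001)          # firing-rate grid (Hz)
E_BASIC, K_SPIKE = 50.0, 1.0               # assumed energy constants

def efficiency(tau1, tau2, w):
    p = w * np.exp(-r / tau1) + (1 - w) * np.exp(-r / tau2)
    p /= np.trapz(p, r)                                  # normalize density
    entropy = -np.trapz(p * np.log(p + 1e-300), r)       # MI upper bound
    energy = E_BASIC + K_SPIKE * np.trapz(r * p, r)      # mean metabolic cost
    return entropy / energy

# Coarse grid search over the two free parameters (mixture weight fixed).
best = max(((t1, t2, efficiency(t1, t2, 0.5))
            for t1 in np.linspace(2, 20, 10)
            for t2 in np.linspace(20, 80, 10)), key=lambda z: z[2])
print("best (tau1, tau2, efficiency):", best)
```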
Waveform fitting and geometry analysis for full-waveform lidar feature extraction
NASA Astrophysics Data System (ADS)
Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu
2016-10-01
This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, the time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
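A minimal sketch of the fitting-and-peak-detection stage described above, assuming a cubic smoothing spline and derivative-based peak detection; the grid density and smoothing parameter are illustrative, not those of the paper:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def waveform_features(t, y, smooth=None):
    """Fit a cubic smoothing spline to a received waveform and return the
    amplitude and full width at half maximum (FWHM) of each local peak."""
    spl = UnivariateSpline(t, y, k=3, s=smooth)
    tt = np.linspace(t[0], t[-1], 2000)
    yy = spl(tt)
    d1 = spl.derivative(1)(tt)
    d2 = spl.derivative(2)(tt)
    # Local maxima: first derivative crosses zero with negative curvature.
    idx = np.where((d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0))[0]
    feats = []
    for i in idx:
        half = yy[i] / 2.0
        lo = i
        while lo > 0 and yy[lo] > half:        # walk left to half maximum
            lo -= 1
        hi = i
        while hi < len(yy) - 1 and yy[hi] > half:   # walk right likewise
            hi += 1
        feats.append({"time": tt[i], "amplitude": yy[i],
                      "fwhm": tt[hi] - tt[lo]})
    return feats
```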
Fluorescence spectroscopy for diagnosis of squamous intraepithelial lesions of the cervix.
Mitchell, M F; Cantor, S B; Ramanujam, N; Tortolero-Luna, G; Richards-Kortum, R
1999-03-01
To calculate receiver operating characteristic (ROC) curves for fluorescence spectroscopy in order to measure its performance in the diagnosis of squamous intraepithelial lesions (SILs) and to compare these curves with those for other diagnostic methods: colposcopy, cervicography, speculoscopy, Papanicolaou smear screening, and human papillomavirus (HPV) testing. Data from our previous clinical study were used to calculate ROC curves for fluorescence spectroscopy. Curves for other techniques were calculated from other investigators' reports. To identify these, a MEDLINE search for articles published from 1966 to 1996 was carried out, using the search terms "colposcopy," "cervicoscopy," "cervicography," "speculoscopy," "Papanicolaou smear," "HPV testing," "fluorescence spectroscopy," and "polar probe" in conjunction with the terms "diagnosis," "positive predictive value," "negative predictive value," and "receiver operating characteristic curve." We found 270 articles, from which articles were selected if they reported results of studies involving high-disease-prevalence populations, reported findings of studies in which colposcopically directed biopsy was the criterion standard, and included sufficient data for recalculation of the reported sensitivities and specificities. We calculated ROC curves for fluorescence spectroscopy using Bayesian and neural net algorithms. A meta-analytic approach was used to calculate ROC curves for the other techniques. Areas under the curves were calculated. Fluorescence spectroscopy using the neural net algorithm had the highest area under the ROC curve, followed by fluorescence spectroscopy using the Bayesian algorithm, followed by colposcopy, the standard diagnostic technique. Cervicography, Papanicolaou smear screening, and HPV testing performed comparably with each other but not as well as fluorescence spectroscopy and colposcopy. Fluorescence spectroscopy performs better than colposcopy and other techniques in the diagnosis of SILs. Because it also permits real-time diagnosis and has the potential of being used by inexperienced health care personnel, this technology holds bright promise.
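For readers who want to reproduce the meta-analytic step, a hedged sketch of reconstructing an ROC area from reported (sensitivity, specificity) operating points via the trapezoidal rule; the operating points below are made up, not taken from the cited studies:

```python
import numpy as np

def auc_from_operating_points(sens, spec):
    """Trapezoidal area under an ROC curve rebuilt from reported
    (sensitivity, specificity) pairs, anchored at (0,0) and (1,1)."""
    fpr = np.concatenate(([0.0], 1.0 - np.asarray(spec, float), [1.0]))
    tpr = np.concatenate(([0.0], np.asarray(sens, float), [1.0]))
    order = np.argsort(fpr)                  # sort points left to right
    return np.trapz(tpr[order], fpr[order])

# Illustrative (invented) operating points for two hypothetical techniques.
print(auc_from_operating_points([0.70, 0.85, 0.95], [0.90, 0.75, 0.50]))
print(auc_from_operating_points([0.60, 0.80], [0.85, 0.60]))
```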
Laplacian scale-space behavior of planar curve corners.
Zhang, Xiaohong; Qu, Ying; Yang, Dan; Wang, Hongxing; Kymer, Jeff
2015-11-01
Scale-space behavior of corners is important for developing an efficient corner detection algorithm. In this paper, we analyze the scale-space behavior of the Laplacian of Gaussian (LoG) operator applied to a planar curve, which constructs the Laplacian scale space (LSS). The analytical expression of the Laplacian scale-space map (LSS map) is obtained, demonstrating the Laplacian scale-space behavior of planar curve corners based on a newly defined unified corner model. From this formula, several properties of the Laplacian scale-space behavior are summarized. Although the LSS demonstrates some similarities to the curvature scale space (CSS), there are still differences. First, no new extreme points are generated in the LSS. Second, the behavior across the different cases of the corner model is consistent and simple, which makes it easy to trace a corner through scale space. Finally, the behavior of the LSS is verified in an experiment on a digital curve.
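A toy construction of an LSS map for a single corner, assuming the corner is modeled as a step in the curve's orientation; this simplifies the paper's unified corner model but shows the no-new-extrema behavior:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Toy corner: a step in the curve's orientation along arclength.
n = 512
orientation = np.where(np.arange(n) < n // 2, 0.0, np.pi / 4)

# Stack LoG responses over increasing scales to form the LSS map.
sigmas = np.geomspace(1.0, 32.0, 40)
lss_map = np.array([gaussian_laplace(orientation, s) for s in sigmas])

# Track the dominant extremum across scales: for this model it stays
# near the true corner location and no new extrema appear.
extrema = np.abs(lss_map).argmax(axis=1)
print(extrema[:5], extrema[-5:])   # remains near n // 2 at all scales
```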
LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS
Einstein, Daniel R.; Dyedov, Vladimir
2010-01-01
Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546
Evaluation of machine learning algorithms for improved risk assessment for Down's syndrome.
Koivu, Aki; Korpimäki, Teemu; Kivelä, Petri; Pahikkala, Tapio; Sairanen, Mikko
2018-05-04
Prenatal screening generates a great amount of data that is used for predicting the risk of various disorders. Prenatal risk assessment is based on multiple clinical variables, and overall performance is defined by how well the risk algorithm is optimized for the population in question. This article evaluates machine learning algorithms for improving the performance of first-trimester screening for Down syndrome. Machine learning algorithms pose an adaptive alternative for developing better risk assessment models using the existing clinical variables. Two real-world data sets were used to experiment with multiple classification algorithms. Implemented models were tested with a third real-world data set, and performance was compared to a predicate method, a commercial risk assessment software. The best-performing deep neural network model gave an area under the curve of 0.96 and a detection rate of 78% at a 1% false positive rate with the test data. A support vector machine model gave an area under the curve of 0.95 and a detection rate of 61% at a 1% false positive rate with the same test data. When compared with the predicate method, the best support vector machine model was slightly inferior, but an optimized deep neural network model was able to give higher detection rates at the same false positive rate, or a similar detection rate at a markedly lower false positive rate. This finding could further improve first-trimester screening for Down syndrome by using existing clinical variables and a large training data set derived from a specific population. Copyright © 2018 Elsevier Ltd. All rights reserved.
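A sketch of the evaluation protocol on synthetic data, since the clinical data sets are not public: train a support vector machine, then read the detection rate at a 1% false positive rate off the ROC curve. The data generator and class imbalance are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic stand-in for screening data (affected cases are rare).
X, y = make_classification(n_samples=4000, n_features=10, weights=[0.97],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

clf = SVC().fit(Xtr, ytr)
score = clf.decision_function(Xte)
fpr, tpr, _ = roc_curve(yte, score)
print("AUC:", roc_auc_score(yte, score))
# Detection rate at a 1% false positive rate, as reported in the abstract.
print("DR @ 1% FPR:", np.interp(0.01, fpr, tpr))
```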
A mesh partitioning algorithm for preserving spatial locality in arbitrary geometries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nivarti, Girish V., E-mail: g.nivarti@alumni.ubc.ca; Salehi, M. Mahdi; Bushe, W. Kendal
2015-01-15
Highlights:
• An algorithm for partitioning computational meshes is proposed.
• The Morton order space-filling curve is modified to achieve improved locality.
• A spatial locality metric is defined to compare results with existing approaches.
• Results indicate improved performance of the algorithm in complex geometries.
Abstract: A space-filling curve (SFC) is a proximity-preserving linear mapping of any multi-dimensional space and is widely used as a clustering tool. Equi-sized partitioning of an SFC ignores the loss in clustering quality that occurs due to inaccuracies in the mapping. Often, this results in poor locality within partitions, especially for the conceptually simple Morton order curves. We present a heuristic that improves partition locality in arbitrary geometries by slicing a Morton order curve at points where spatial locality is sacrificed. In addition, we develop algorithms that evenly distribute points to the extent possible while maintaining spatial locality. A metric is defined to estimate relative inter-partition contact as an indicator of communication in parallel computing architectures. Domain partitioning tests have been conducted on geometries relevant to turbulent reactive flow simulations. The results obtained highlight the performance of our method as an unsupervised and computationally inexpensive domain partitioning tool.
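The Morton order underlying the heuristic is simple to compute: interleaving the bits of a cell's integer grid coordinates gives its position along the curve, and sorting by that key clusters spatial neighbors. A minimal sketch (the paper's slicing and load-balancing steps are not reproduced here):

```python
def morton_key_2d(x, y, bits=16):
    """Interleave the bits of integer grid coordinates (x, y) to obtain the
    cell's position along the Morton order (Z-order) space-filling curve."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits in even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits in odd positions
    return key

# Equi-sized slices of this ordering are the baseline partitioning that
# the paper's locality-aware slicing improves on.
cells = [(3, 1), (0, 0), (1, 0), (2, 2)]
print(sorted(cells, key=lambda c: morton_key_2d(*c)))
```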
Bell's palsy: a summary of current evidence and referral algorithm.
Glass, Graeme E; Tzafetta, Kallirroi
2014-12-01
Spontaneous idiopathic facial nerve (Bell's) palsy leaves residual hemifacial weakness in 29% of cases, which is severe and disfiguring in over half of those affected. Acute medical management remains the best way to improve outcomes, and reconstructive surgery can improve long-term disfigurement; however, both acute and surgical options are time-dependent. As family practitioners see, on average, one case every 2 years, a summary of this condition organized around common clinical questions may improve acute management and guide referral for those who need specialist input. We formulated a series of clinical questions likely to be of use to family practitioners encountering this condition and sought evidence from the literature to answer them. The lifetime risk is 1 in 60, and the condition is more common in pregnancy and diabetes mellitus. Patients often present with facial pain or paraesthesia, altered taste, and intolerance to loud noise in addition to facial droop. It is probably caused by ischaemic compression of the facial nerve within the meatal segment of the facial canal, most likely as a result of viral inflammation. When given early, high-dose corticosteroids can improve outcomes; neither antiviral therapy nor other adjuvant therapies are supported by evidence. As the facial muscles remain viable re-innervation targets for up to 2 years, late referrals require more complex reconstructions. Early recognition, steroid therapy, and early referral for facial reanimation (when the diagnosis is secure) are important features of good management of these complex cases. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Tightness of correlation inequalities with no quantum violation
NASA Astrophysics Data System (ADS)
Ramanathan, Ravishankar; Quintino, Marco Túlio; Sainz, Ana Belén; Murta, Gláucia; Augusiak, Remigiusz
2017-01-01
We study the faces of the set of quantum correlations, i.e., the Bell and noncontextuality inequalities without any quantum violation. First, we investigate whether every proper (facet-defining) Bell inequality for two parties, other than the trivial ones arising from positivity, normalization, and no-signaling, can be violated by quantum correlations, i.e., whether the classical Bell polytope or the smaller correlation polytope share any facets with their respective quantum sets. To do this, we develop a recently derived bound on the quantum value of linear games, based on the norms of game matrices, into a simple sufficient condition for identifying linear games with no quantum advantage, and we show how this bound can be extended to the general class of unique games. We then show that the paradigmatic examples of correlation Bell inequalities with no quantum violation, namely the nonlocal computation games, do not constitute facet-defining Bell inequalities, not even for the correlation polytope; we also extend this to an arbitrary prime number of outcomes for a specific class of these games. We then study the faces in the simplest Clauser-Horne-Shimony-Holt Bell scenario of dichotomic (binary) measurements and identify edges in the set of quantum correlations in this scenario. Finally, we relate the noncontextual polytope of single-party correlation inequalities to the cut polytope CUT(∇G), where G denotes the compatibility graph of observables in the contextuality scenario and ∇G denotes the suspension graph of G. We observe that there exist facet-defining noncontextuality inequalities with no quantum violation, and furthermore that this set of inequalities extends beyond those implied by the consistent exclusivity principle.
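For context, the correlation Bell inequality in the Clauser-Horne-Shimony-Holt scenario referenced above, together with its quantum (Tsirelson) bound, reads as follows; this is standard background rather than a result of the paper:

```latex
\left| \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle
     + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \right|
\;\le\; 2 \ \text{(classical)}, \qquad
\;\le\; 2\sqrt{2} \ \text{(quantum, Tsirelson's bound)}.
```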
Bell XP–59A Airacomet in the Altitude Wind Tunnel
1944-03-21
The secret test of the Bell YP–59A Airacomet in the spring of 1944 was the first investigation in the National Advisory Committee for Aeronautics (NACA) Aircraft Engine Research Laboratory's new Altitude Wind Tunnel (AWT). The Airacomet, powered by two General Electric I–A centrifugal turbojets, was the first US jet aircraft. The Airacomet's 290-mile-per-hour top speed, however, was dwarfed by the German Messerschmitt Me 262 Schwalbe's 540 miles per hour. In 1941 and 1942 General Electric built the first US jet engines based on technical drawings from British engineer Frank Whittle. Bell Aircraft was contracted to produce an airframe to incorporate the new engines; the result was the Bell XP–59A Airacomet. The aircraft made its first flight over Muroc Lake, California, on October 2, 1942. The aircraft continued to struggle over the next year, and the NACA was asked to test it in the new AWT. A Bell YP–59A was flown from the Bell plant in Buffalo to Cleveland by Bob Stanley, who had piloted the first successful flight of the XP–59A at Muroc in 1942. The wing tips and tail were cut from the aircraft so that it would fit into the AWT's test section. The study first analyzed the engines in their original configuration and then implemented a boundary layer removal duct, a new nacelle inlet, and new cooling seals. Tests of the modified version showed that the improved airflow distribution increased the I–16's performance by 25 percent. Despite the improved speed, the aircraft was not stable enough to be used in combat, and the design was soon abandoned.
Cui, Han; Chen, Yi; Zhong, Weizheng; Yu, Haibo; Li, Zhifeng; He, Yuhai; Yu, Wenlong; Jin, Lei
2016-01-01
Bell's palsy is a peripheral nerve disease that causes abrupt onset of unilateral facial weakness. Pathologic studies have shown ischemia of the facial nerve on the affected side of the face in Bell's palsy patients. Since facial nerve blood flow runs primarily proximal to distal, facial skin microcirculation is also affected after the onset of Bell's palsy; monitoring the full area of facial skin microcirculation can therefore help characterize the condition of Bell's palsy patients. In this study, a non-invasive, real-time, full-field imaging technology, laser speckle imaging (LSI), was applied to measure the facial skin blood perfusion distribution of Bell's palsy patients. 85 participants at different stages of Bell's palsy were included. Results showed that patients' facial skin perfusion on the affected side was lower than on the normal side in the eyelid region, and that the asymmetry of facial skin perfusion between the two eyelids was positively related to the stage of the disease (P < 0.001). During recovery, the perfusion of the affected eyelid increased to nearly that of the normal side. This study is a novel application of LSI to evaluating the facial skin perfusion of Bell's palsy patients; we found that facial skin blood perfusion reflects the stage of Bell's palsy, which suggests that microcirculation should be investigated in patients with this neurological deficit and that LSI is a potential diagnostic tool for Bell's palsy.
46 CFR 108.623 - General alarm bell switch.
Code of Federal Regulations, 2011 CFR
2011-10-01
46 CFR 108.623 (Equipment Markings and Instructions): Each general alarm bell switch must be marked “GENERAL ALARM” on a plate or other firm noncorrosive backing.
46 CFR 108.623 - General alarm bell switch.
Code of Federal Regulations, 2013 CFR
2013-10-01
46 CFR 108.623 (Equipment Markings and Instructions): Each general alarm bell switch must be marked “GENERAL ALARM” on a plate or other firm noncorrosive backing.
46 CFR 108.623 - General alarm bell switch.
Code of Federal Regulations, 2010 CFR
2010-10-01
46 CFR 108.623 (Equipment Markings and Instructions): Each general alarm bell switch must be marked “GENERAL ALARM” on a plate or other firm noncorrosive backing.
46 CFR 108.623 - General alarm bell switch.
Code of Federal Regulations, 2014 CFR
2014-10-01
46 CFR 108.623 (Equipment Markings and Instructions): Each general alarm bell switch must be marked “GENERAL ALARM” on a plate or other firm noncorrosive backing.
46 CFR 108.623 - General alarm bell switch.
Code of Federal Regulations, 2012 CFR
2012-10-01
46 CFR 108.623 (Equipment Markings and Instructions): Each general alarm bell switch must be marked “GENERAL ALARM” on a plate or other firm noncorrosive backing.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-18
..., Illinois; Notification of Proposed Production Activity, Easton-Bell Sports, Inc. (Sports Equipment), Rantoul, Illinois Easton-Bell Sports, Inc. (Easton-Bell Sports) submitted a notification of proposed..., 2013. A separate application for subzone status at the Easton-Bell Sports facility was submitted and is...
Using Machine Learning To Predict Which Light Curves Will Yield Stellar Rotation Periods
NASA Astrophysics Data System (ADS)
Agüeros, Marcel; Teachey, Alexander
2018-01-01
Using time-domain photometry to reliably measure a solar-type star's rotation period requires that its light curve have a number of favorable characteristics. The probability of recovering a period will be a non-linear function of these light curve features, which are either astrophysical in nature or set by the observations. We employ standard machine learning algorithms (artificial neural networks and random forests) to predict whether a given light curve will produce a robust rotation period measurement from its Lomb-Scargle periodogram. The algorithms are trained and validated using salient statistics extracted from both simulated light curves and their corresponding periodograms, and we apply these classifiers to the most recent Intermediate Palomar Transient Factory (iPTF) data release. With this pipeline, we anticipate measuring rotation periods for a significant fraction of the ∼4×10^8 stars in the iPTF footprint.
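A schematic of the classification step, with synthetic features standing in for the salient statistics the authors extract; the feature names and the toy labeling rule below are ours, purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000
# Hypothetical per-light-curve features (illustrative, not the paper's).
X = np.column_stack([
    rng.uniform(0, 1, n),     # e.g. periodogram peak power
    rng.uniform(0, 0.5, n),   # e.g. photometric noise level
    rng.uniform(0, 60, n),    # e.g. observing baseline (days)
])
# Toy label: period "recovered" when the peak is strong relative to noise.
y = (X[:, 0] > 2.5 * X[:, 1]).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```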
A hand tracking algorithm with particle filter and improved GVF snake model
NASA Astrophysics Data System (ADS)
Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe
2017-07-01
To address the problem that a particle filter alone cannot obtain accurate hand information, a hand tracking algorithm is proposed that combines a particle filter with a skin-color-adaptive gradient vector flow (GVF) snake model. An adaptive GVF and a skin-color-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm corrects the particle filter parameters in real time, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53% and improves tracking accuracy against complex, moving backgrounds, even under large occlusions.
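For readers unfamiliar with the particle filter half of this combination, a generic predict-weight-resample cycle for 2-D position tracking is sketched below; the paper's skin-color GVF coupling and parameter correction are not reproduced, and all noise parameters are assumptions:

```python
import numpy as np

def particle_filter_step(particles, weights, measurement,
                         motion_std=2.0, meas_std=5.0, rng=None):
    """One predict-weight-resample cycle of a bootstrap particle filter
    tracking a 2-D position (a generic sketch, not the paper's variant)."""
    if rng is None:
        rng = np.random.default_rng()
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Weight: Gaussian likelihood of the observed position.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # Resample: multinomial resampling to avoid particle degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```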
Combinatorial Algorithms for Portfolio Optimization Problems - Case of Risk Moderate Investor
NASA Astrophysics Data System (ADS)
Juarna, A.
2017-03-01
Portfolio optimization is the problem of finding the optimal combination of n stocks from N ≥ n available stocks that gives maximal aggregate return and minimal aggregate risk. In this paper, N = 43 stocks are taken from the IDX (Indonesia Stock Exchange) group of the 45 most-traded stocks, known as the LQ45, with p = 24 monthly returns for each stock spanning 2013-2014. The problem is treated combinatorially, with the algorithm constructed around two considerations: a risk-moderate type of investor and a maximum allowed correlation coefficient between any two eligible stocks. The main output of the algorithm is a set of curves of three portfolio attributes, namely the size, the return-to-risk ratio, and the percentage of negative correlation coefficients among chosen stock pairs, each as a function of the maximum allowed correlation coefficient between any two stocks. The output curves show that the portfolio contains three stocks with a return-to-risk ratio of 14.57 if the maximum allowed correlation coefficient between eligible stocks is negative, and 19 stocks with a maximum return-to-risk ratio of 25.48 when the maximum allowed correlation coefficient is 0.17.
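One plausible reading of the construction, sketched below: greedily admit stocks subject to a maximum pairwise correlation, then score the equal-weight portfolio by its return-to-risk ratio. This is an illustration under our own assumptions, not the paper's exact algorithm.

```python
import numpy as np

def build_portfolio(returns, rho_max):
    """Greedy correlation-constrained selection: admit stocks in order of
    mean return, skipping any whose correlation with an already chosen
    stock exceeds rho_max; return the chosen indices and the equal-weight
    portfolio's return-to-risk ratio."""
    corr = np.corrcoef(returns.T)
    order = np.argsort(-returns.mean(axis=0))
    chosen = []
    for j in order:
        if all(corr[j, k] <= rho_max for k in chosen):
            chosen.append(j)
    port = returns[:, chosen].mean(axis=1)      # equal-weight returns
    return chosen, port.mean() / port.std()

# Toy data mirroring the paper's dimensions: p = 24 months, N = 43 stocks.
rng = np.random.default_rng(0)
returns = rng.normal(0.01, 0.05, size=(24, 43))
print(build_portfolio(returns, rho_max=0.17))
```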
Dynamic Appliances Scheduling in Collaborative MicroGrids System
Bilil, Hasnae; Aniba, Ghassane; Gharavi, Hamid
2017-01-01
In this paper, a new approach based on a collaborative system of MicroGrids (MGs) is proposed to enable household appliance scheduling. To achieve this, appliances are categorized into flexible and non-flexible Deferrable Loads (DLs), according to their electrical components. We propose a dynamic scheduling algorithm in which users can systematically manage the operation of their electric appliances. The main challenge is to develop a flattening function calculus (reshaping) for both flexible and non-flexible DLs. Implementation of the proposed algorithm requires dynamically solving two successive multi-objective optimization (MOO) problems: the first targets the activation schedule of non-flexible DLs, and the second deals with the power profiles of flexible DLs. The MOO problems are solved using a fast and elitist multi-objective genetic algorithm (NSGA-II). Finally, to show the efficiency of the proposed approach, a case study was developed of a collaborative system of 40 MGs registered in the load-curve flattening program. The results verify that the load curve indeed becomes very flat under the proposed scheduling approach. PMID:28824226
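The flattening objective can be illustrated with a single deferrable load: choose the start slot that minimizes the variance of the aggregate curve. A toy sketch under that simplification (the paper's NSGA-II formulation handles many appliances and two successive objective sets):

```python
import numpy as np

def flattest_start(base_load, appliance_profile):
    """Pick the start slot for one deferrable load that makes the
    aggregate curve flattest (minimum variance over the horizon)."""
    T, d = len(base_load), len(appliance_profile)
    best_t, best_var = 0, np.inf
    for t in range(T - d + 1):
        total = base_load.copy()
        total[t:t + d] += appliance_profile
        if total.var() < best_var:
            best_t, best_var = t, total.var()
    return best_t

base = np.array([3, 3, 4, 6, 8, 9, 7, 5, 4, 3], float)   # toy daily curve
print(flattest_start(base, np.array([2.0, 2.0])))        # fills a valley
```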
Exploring Algorithms for Stellar Light Curves With TESS
NASA Astrophysics Data System (ADS)
Buzasi, Derek
2018-01-01
The Kepler and K2 missions have produced tens of thousands of stellar light curves, which have been used to measure rotation periods, characterize photometric activity levels, and explore phenomena such as differential rotation. The quasi-periodic nature of rotational light curves, combined with the potential presence of additional periodicities not due to rotation, complicates the analysis of these time series and makes characterization of uncertainties difficult. A variety of algorithms have been used for the extraction of rotational signals, including autocorrelation functions, discrete Fourier transforms, Lomb-Scargle periodograms, wavelet transforms, and the Hilbert-Huang transform. In addition, in the case of K2, a number of different pipelines have been used to produce initial detrended light curves from the raw image frames. In the near future, TESS photometry, particularly that deriving from the full-frame images, will dramatically expand the number of such light curves, but details of the pipeline to be used to produce photometry from the FFIs remain under development. K2 data offer us an opportunity to explore the utility of different reduction and analysis tool combinations applied to these astrophysically important tasks. In this work, we apply a wide range of algorithms to light curves produced by a number of popular K2 pipeline products to better understand the advantages and limitations of each approach and to provide guidance for the most reliable and most efficient analysis of TESS stellar data.
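As a concrete example of one algorithm in this toolbox, a Lomb-Scargle period search on an unevenly sampled toy light curve using astropy; the signal parameters are invented for illustration:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Unevenly sampled toy light curve with a 7.3-day rotation signal.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 80, 500))                    # days
y = 0.01 * np.sin(2 * np.pi * t / 7.3) + rng.normal(0, 0.004, t.size)

frequency, power = LombScargle(t, y).autopower()
print("best period (d):", 1.0 / frequency[np.argmax(power)])
```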
Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm that mitigates the computational cost of pure DSMC in the near-continuum regime. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time and has a fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
A Multialgorithm Approach to Land Surface Modeling of Suspended Sediment in the Colorado Front Range
Stewart, J. R.; Kasprzyk, J. R.; Rajagopalan, B.; Minear, J. T.; Raseman, W. J.
2017-01-01
A new paradigm for simulating suspended sediment load (SSL) with a Land Surface Model (LSM) is presented here. Five erosion and SSL algorithms were applied within a common LSM framework to quantify uncertainties and evaluate predictability in two steep, forested catchments (>1,000 km2). The algorithms were chosen from among widely used sediment models: empirically based, the monovariate rating curve (MRC) and the Modified Universal Soil Loss Equation (MUSLE); stochastically based, the Load Estimator (LOADEST); conceptually based, the Hydrologic Simulation Program—Fortran (HSPF); and physically based, the Distributed Hydrology Soil Vegetation Model (DHSVM). The algorithms were driven by the hydrologic fluxes and meteorological inputs generated by the Variable Infiltration Capacity (VIC) LSM. A multiobjective calibration was applied to each algorithm, and optimized parameter sets were validated over an excluded period as well as in a transfer experiment to a nearby catchment to explore parameter robustness. Algorithm performance decreased consistently when parameter sets were applied to periods whose SSL variability differed greatly from the calibration period. Of particular interest was a joint calibration of all sediment-algorithm and streamflow parameters simultaneously, which revealed trade-offs between streamflow performance and the partitioning of runoff and base flow needed to optimize SSL timing, decreasing the flexibility and robustness of the streamflow simulation across time periods. Parameter transferability to another catchment was most successful in the more process-oriented algorithms, the HSPF and the DHSVM. This first-of-its-kind multialgorithm sediment scheme offers a unique capability to portray acute episodic loading while quantifying trade-offs and uncertainties across a range of algorithm structures. PMID:29399268
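The simplest of the five algorithms, the monovariate rating curve, is commonly written SSL = aQ^b and fit by linear regression in log-log space. A brief sketch with toy numbers (the paper's multiobjective calibration is more involved):

```python
import numpy as np

def fit_rating_curve(Q, SSL):
    """Fit the monovariate rating curve SSL = a * Q**b by ordinary least
    squares in log-log space (a common formulation; the paper's MRC
    calibration may differ in detail)."""
    b, log_a = np.polyfit(np.log(Q), np.log(SSL), 1)
    return np.exp(log_a), b

Q = np.array([5.0, 12.0, 30.0, 80.0, 150.0])     # discharge (m^3/s), toy
SSL = np.array([2.0, 9.0, 40.0, 210.0, 600.0])   # sediment load (t/day)
a, b = fit_rating_curve(Q, SSL)
print(f"SSL ≈ {a:.3f} * Q^{b:.2f}")
```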
Bron, Esther E; Smits, Marion; van der Flier, Wiesje M; Vrenken, Hugo; Barkhof, Frederik; Scheltens, Philip; Papma, Janne M; Steketee, Rebecca M E; Méndez Orellana, Carolina; Meijboom, Rozanna; Pinto, Madalena; Meireles, Joana R; Garrett, Carolina; Bastos-Leite, António J; Abdulkadir, Ahmed; Ronneberger, Olaf; Amoroso, Nicola; Bellotti, Roberto; Cárdenas-Peña, David; Álvarez-Meza, Andrés M; Dolph, Chester V; Iftekharuddin, Khan M; Eskildsen, Simon F; Coupé, Pierrick; Fonov, Vladimir S; Franke, Katja; Gaser, Christian; Ledig, Christian; Guerrero, Ricardo; Tong, Tong; Gray, Katherine R; Moradi, Elaheh; Tohka, Jussi; Routier, Alexandre; Durrleman, Stanley; Sarica, Alessia; Di Fatta, Giuseppe; Sensi, Francesco; Chincarini, Andrea; Smith, Garry M; Stoyanov, Zhivko V; Sørensen, Lauge; Nielsen, Mads; Tangaro, Sabina; Inglese, Paolo; Wachinger, Christian; Reuter, Martin; van Swieten, John C; Niessen, Wiro J; Klein, Stefan
2015-05-01
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n=30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org. Copyright © 2015 Elsevier Inc. All rights reserved.