NASA Astrophysics Data System (ADS)
Chu, Xiaowen; Li, Bo; Chlamtac, Imrich
2002-07-01
Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance of wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume a static routing and random wavelength assignment RWA algorithm. However, existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that wavelength converter placement and RWA algorithms are closely related, in the sense that a well-designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under fixed-alternate routing, we propose a heuristic placement algorithm called Minimum Blocking Probability First (MBPF). Under least-loaded routing, we propose a heuristic converter placement algorithm called Weighted Maximum Segment Length (WMSL). The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks: the 14-node NSFNET, the 19-node EON, and the 38-node CTNET. We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin, but also achieve almost the same performance as full wavelength conversion under the same RWA algorithm.
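As a concrete illustration of the placement idea, the sketch below greedily selects converter sites that most reduce the traffic-weighted longest wavelength-continuous segment over a set of fixed-alternate routes. It is a minimal sketch in the spirit of WMSL only; the scoring rule, data structures, and names are assumptions, not the paper's exact formulation.

```python
# Greedy converter-placement sketch in the spirit of WMSL: pick the sites
# that most reduce the traffic-weighted longest wavelength-continuous
# segment over the fixed-alternate routes. The scoring rule and all names
# are illustrative assumptions, not the paper's exact formulation.

def longest_segment(route, converters):
    """Longest run of hops not broken by a converter node."""
    longest = current = 0
    for i in range(len(route) - 1):       # walk the route hop by hop
        current += 1
        if route[i + 1] in converters:    # a converter ends the segment
            longest = max(longest, current)
            current = 0
    return max(longest, current)

def place_converters(routes, weights, candidates, k):
    """Greedily choose k converter sites minimizing the weighted sum of
    longest segment lengths over all candidate routes."""
    chosen = set()
    for _ in range(k):
        def score(node):
            trial = chosen | {node}
            return sum(w * longest_segment(r, trial)
                       for r, w in zip(routes, weights))
        chosen.add(min((n for n in candidates if n not in chosen), key=score))
    return chosen

routes = [["A", "B", "C", "D", "E"], ["B", "C", "D", "F"]]
weights = [2.0, 1.0]                      # relative offered load per route
print(place_converters(routes, weights, {"B", "C", "D"}, k=1))  # {'C'}
```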
NASA Astrophysics Data System (ADS)
Luo, Yanting; Zhang, Yongjun; Gu, Wanyi
2009-11-01
In large dynamic networks it is extremely difficult to maintain accurate routing information at all network nodes. Existing studies have illustrated the impact of imprecise state information on the performance of dynamic routing and wavelength assignment (RWA) algorithms. An algorithm called Bypass Based Optical Routing (BBOR), proposed by Masip-Bruin et al., can reduce the effects of inaccurate routing information in networks operating under the wavelength-continuity constraint. They later extended the BBOR mechanism (referred to as EBBOR below) to networks with sparse and limited wavelength conversion. However, EBBOR considers the wavelength-conversion capability only in the bypass-path computation step, so its performance may decline as the degree of wavelength translation increases (a concept explained again in the introduction). We demonstrate this issue through theoretical analysis and introduce a novel algorithm that modifies both the lightpath selection and the bypass-path computation relative to EBBOR. Simulations show that the Modified EBBOR (MEBBOR) algorithm significantly improves the blocking performance in optical networks with conversion capability.
Wavelength assignment algorithm considering the state of neighborhood links for OBS networks
NASA Astrophysics Data System (ADS)
Tanaka, Yu; Hirota, Yusuke; Tode, Hideki; Murakami, Koso
2005-10-01
Optical WDM technology has recently been introduced into backbone networks, and Optical Burst Switching (OBS) systems have become a realistic candidate for the future optical switching scheme. OBS systems do not buffer bursts at intermediate nodes, so avoiding overlapping wavelength reservations between partially interfering paths is an important issue. To solve this problem, a wavelength assignment scheme based on priority management tables has been proposed; it reduces the burst blocking probability, but the priority management tables require huge memory space. In this paper, we propose a wavelength assignment algorithm that reduces both the number of priority management tables and the burst blocking probability. To reduce the number of priority management tables, we allocate and manage them per link. To reduce the burst blocking probability, our method announces priority changes to intermediate nodes. We evaluate its performance in terms of the burst blocking probability and the reduction rate of priority management tables.
Performance evaluation of distributed wavelength assignment in WDM optical networks
NASA Astrophysics Data System (ADS)
Hashiguchi, Tomohiro; Wang, Xi; Morikawa, Hiroyuki; Aoyama, Tomonori
2004-04-01
In WDM wavelength-routed networks, prior to a data transfer, a call setup procedure is required to reserve a wavelength path between source-destination node pairs. A distributed approach to connection setup can achieve very high speed while improving the reliability and reducing the implementation cost of the network. However, along with many advantages, the distributed scheme poses major challenges in how wavelength management and allocation can be carried out efficiently. In this paper, we apply a distributed wavelength assignment algorithm named priority-based wavelength assignment (PWA), originally proposed for use in burst-switched optical networks, to the problem of reserving wavelengths in path reservation protocols in distributed-control optical networks. Instead of assigning wavelengths randomly, this approach lets each node select the "safest" wavelengths based on wavelength utilization history, thus preventing unnecessary future contention. The simulation results presented in this paper show that the proposed protocol can enhance the performance of the system without introducing any apparent drawbacks.
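The history-driven selection can be captured in a few lines. The following is a minimal sketch, assuming a simple +1/-1 priority update on reservation success or contention; the class name, update rule, and data layout are illustrative assumptions rather than the paper's exact scheme.

```python
# Hedged sketch of a priority-based wavelength assignment (PWA) policy:
# each node keeps a per-wavelength priority learned from reservation
# history and picks the available wavelength with the highest priority.

class PWANode:
    def __init__(self, num_wavelengths):
        self.priority = [0] * num_wavelengths

    def choose(self, available):
        """Pick the 'safest' wavelength among the available indices."""
        return max(available, key=lambda w: self.priority[w])

    def feedback(self, wavelength, success):
        """Assumed update rule: reward success, penalize contention."""
        self.priority[wavelength] += 1 if success else -1

node = PWANode(8)
w = node.choose([0, 3, 5])      # all priorities equal: returns 0
node.feedback(w, success=False)  # reservation failed on wavelength 0
print(node.choose([0, 3, 5]))    # now avoids 0 and returns 3
```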
Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin
2015-10-19
The feasibility of software-defined optical networking (SDON) for practical application depends critically on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed that achieve high network capacity at reduced computation cost, a significant attribute for a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing-table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and the computation complexity of the routing-table update procedure through a simulation study.
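A compact sketch of the hottest-request-first idea follows: requests are served in descending order of demand intensity times end-to-end distance, each over its shortest path with first-fit wavelength assignment. The graph encoding and the exact hotness metric are assumptions for illustration; the paper's routing-table update procedures are omitted, and the topology is assumed connected.

```python
# Hedged sketch of a hottest-request-first RWA pass.
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path on an unweighted adjacency dict."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def serve_hottest_first(adj, requests, num_wavelengths):
    """requests: list of dicts with 'src', 'dst' (distinct) and 'intensity'."""
    free = {}  # undirected link -> set of free wavelength indices

    def link(u, v):
        return free.setdefault(frozenset((u, v)), set(range(num_wavelengths)))

    def hotness(r):
        hops = len(shortest_path(adj, r["src"], r["dst"])) - 1
        return r["intensity"] * hops      # demand intensity x distance

    for r in sorted(requests, key=hotness, reverse=True):
        path = shortest_path(adj, r["src"], r["dst"])
        links = list(zip(path, path[1:]))
        usable = set.intersection(*(link(u, v) for u, v in links))
        r["wavelength"] = min(usable) if usable else None  # first fit
        for u, v in (links if usable else []):
            link(u, v).discard(r["wavelength"])
    return requests
```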
NASA Astrophysics Data System (ADS)
Hu, Peigang; Jin, Yaohui; Zhang, Chunlei; He, Hao; Hu, WeiSheng
2005-02-01
Increasing switching capacity makes optical nodes considerably more complex. Due to cost and technology limitations, an optical node is often designed with partial switching capability and partial resource sharing, which makes the node blocking to some extent; examples include the multi-granularity switching node, which is in fact a structure that uses pass-through wavelengths to reduce the dimension of the OXC, and the OXC with partially shared wavelength converters (WCs). These blocking nodes can be expected to strongly affect the routing and wavelength assignment problem. Previous works studied one blocking case, the partial-WC OXC, using complicated wavelength assignment algorithms, but the complexity of these schemes makes them impractical in real networks. In this paper, we propose a new scheme based on node blocking state advertisement to reduce the retry or rerouting probability and improve routing efficiency in networks with blocking nodes. In the scheme, the node blocking state is advertised to the other nodes in the network and used in subsequent route calculations to find the path with the lowest blocking probability. The performance of the scheme is evaluated using a discrete event model on the 14-node NSFNET, in which every node employs a partially shared WC OXC structure. In the simulation, a simple first-fit wavelength assignment algorithm is used. The simulation results demonstrate that the new scheme considerably reduces the retry or rerouting probability in the routing process.
NASA Astrophysics Data System (ADS)
Iyer, Sridhar
2016-12-01
The ever-increasing global Internet traffic will inevitably lead to a serious upgrade of the current optical networks' capacity. The legacy infrastructure can be enhanced not only by increasing capacity but also by adopting advanced modulation formats with increased spectral efficiency at higher data rates. In a transparent mixed-line-rate (MLR) optical network, different line rates, on different wavelengths, can coexist on the same fiber. Migration to data rates higher than 10 Gbps requires the implementation of phase modulation schemes. However, the co-existing on-off keying (OOK) channels cause critical physical layer impairments (PLIs) to the phase-modulated channels, mainly through cross-phase modulation (XPM), which in turn limits the network's performance. To mitigate this effect, a more sophisticated PLI-routing and wavelength assignment (PLI-RWA) scheme needs to be adopted. In this paper, we investigate the critical impairment for each data rate and the way it affects the quality of transmission (QoT). In view of the aforementioned, we present a novel dynamic PLI-RWA algorithm for MLR optical networks. The proposed algorithm is compared through simulations with the shortest-path and minimum-hop routing schemes. The simulation results show that the proposed algorithm performs better than the existing schemes.
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are evolving toward multi-domain, larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including selecting the optimal multicast domain sequence and deciding which domains the core nodes should belong to. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks and present mixed integer linear programming (MILP) formulations that optimally construct MP2MP multicast trees. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvements in network resource occupation and multicast-tree setup latency compared with conventional algorithms designed for a single-domain network environment.
NASA Astrophysics Data System (ADS)
Wang, Fu; Liu, Bo; Zhang, Lijia; Jin, Feifei; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun
2017-03-01
The wavelength-division multiplexing passive optical network (WDM-PON) is a potential technology for carrying multiple services in an optical access network, but it suffers from high cost and still-immature technology for users. A software-defined WDM/time-division multiplexing PON was proposed to meet the requirements of high bandwidth, high performance, and multiple services, together with a reasonable and effective uplink dynamic bandwidth allocation algorithm. A controller with dynamic wavelength and slot assignment was introduced, and different optical dynamic bandwidth management strategies were formulated flexibly for services of different priorities according to the network load. The simulation compares the proposed algorithm with the interleaved polling with adaptive cycle time algorithm; the proposed algorithm shows better average delay, throughput, and bandwidth utilization. The results show that the delay is reduced to 62% and the throughput is improved by 35%.
Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference.
Park, Hyoung-Jun; Song, Minho
2008-10-29
The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method.
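As a numerical illustration of the calibration step, the sketch below fits a polynomial that maps scan time to wavelength using the ITU filter's temporal peaks, whose true wavelengths lie on the known ITU grid, and then evaluates it at the FBG peak times. The grid spacing, peak times, and polynomial order are made-up values for illustration only.

```python
# Hedged sketch of polynomial wavelength calibration against an ITU grid.
import numpy as np

itu_peak_times = np.array([0.12, 0.19, 0.27, 0.36, 0.44, 0.53])  # s, measured
itu_wavelengths = 1550.12 + 0.8 * np.arange(6)  # nm, ~100 GHz ITU grid

# fit the (nonlinear) scan-time -> wavelength mapping
coeffs = np.polyfit(itu_peak_times, itu_wavelengths, deg=3)

fbg_peak_times = np.array([0.22, 0.41])          # s, sensor reflections
fbg_wavelengths = np.polyval(coeffs, fbg_peak_times)
print(fbg_wavelengths)                           # assigned Bragg wavelengths
```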
NASA Astrophysics Data System (ADS)
Iyer, Sridhar
2015-06-01
With ever-increasing traffic demands, the infrastructure of the current 10 Gbps optical network needs to be enhanced. Further, since the energy crisis is gaining increasing concern, new research topics need to be devised and technological solutions for energy conservation need to be investigated. In an all-optical mixed line rate (MLR) network, the feasibility of a lightpath is determined by the physical layer impairment (PLI) accumulation. In contrast to the PLI-aware routing and wavelength assignment (PLIA-RWA) algorithms applicable to a 10 Gbps wavelength-division multiplexed (WDM) network, a new routing, wavelength, and modulation format assignment (RWMFA) algorithm is required for the MLR optical network. With the rapid growth of energy consumption in Information and Communication Technologies (ICT), a lot of attention is now being devoted to "green" ICT solutions. This article reviews different RWMFA (PLIA-RWA) algorithms for MLR networks and surveys the most relevant research activities aimed at minimizing energy consumption in optical networks. In essence, this article presents a comprehensive and timely survey of a growing field of research, as it covers most aspects of MLR and energy-driven optical networks. The author thus aims to provide a comprehensive reference for the growing base of researchers who will work on MLR and energy-driven optical networks in the coming years. Finally, the article identifies several open problems for future research.
Multicasting for all-optical multifiber networks
NASA Astrophysics Data System (ADS)
Köksal, Fatih; Ersoy, Cem
2007-02-01
All-optical wavelength-routed WDM WANs can support the high bandwidth and long session duration requirements of application scenarios such as interactive distance learning or simultaneous on-line diagnosis of patients in different hospitals. However, multiple fibers and the limited sparse light splitting and wavelength conversion capabilities of switches result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for the routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics [the layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA)] to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX and significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes the routing and fiber-wavelength assignment phases, whereas the other candidates attack the problem by decomposing it into two phases. The experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of the two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than First Fit.
An improved least cost routing approach for WDM optical network without wavelength converters
NASA Astrophysics Data System (ADS)
Bonani, Luiz H.; Forghani-elahabad, Majid
2016-12-01
The routing and wavelength assignment (RWA) problem has attracted much attention in optical networking, and several algorithms have been proposed in the literature to solve it. The best-known techniques for the dynamic routing subproblem are fixed routing, fixed-alternate routing, and adaptive routing. The first leads to a high blocking probability (BP), and the last incurs high computational complexity and requires extensive support from the control and management protocols. The second offers a trade-off between performance and complexity, and hence we improve on it in this work. Considering the RWA problem in a wavelength-routed optical network with no wavelength converters, we propose an improved technique for the routing subproblem that decreases the BP of the network. Following the fixed-alternate approach, the first k shortest paths (SPs) between each node pair are determined. We then rearrange the SPs according to a newly defined cost for the links and paths. Upon arrival of a connection request, the sorted paths are checked consecutively for an available wavelength according to the most-used technique. We implement our proposed algorithm and the least-hop fixed-alternate algorithm to show how the rearrangement of SPs contributes to a lower BP in the network. The numerical results demonstrate the efficiency of our proposed algorithm in comparison with the others for different numbers of available wavelengths.
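A minimal sketch of this routing step is given below: the k precomputed candidate paths are re-ranked by a path cost and then scanned with most-used wavelength assignment. Since the paper's exact cost definition is not reproduced here, current link load serves as an illustrative stand-in, and all data structures are assumptions.

```python
# Hedged sketch: fixed-alternate routing with re-ranked paths and the
# most-used wavelength assignment policy.

def path_cost(path, link_load):
    """Sum of per-link costs along a path (links keyed as frozensets)."""
    return sum(link_load[frozenset(l)] for l in zip(path, path[1:]))

def route_request(paths, link_load, wl_use, wl_free):
    """paths: k candidate node lists; wl_use[w]: network-wide usage count;
    wl_free[link]: set of idle wavelength indices on each link."""
    for path in sorted(paths, key=lambda p: path_cost(p, link_load)):
        links = [frozenset(l) for l in zip(path, path[1:])]
        usable = set.intersection(*(wl_free[l] for l in links))
        if usable:  # most-used policy: prefer the busiest wavelength
            return path, max(usable, key=lambda w: wl_use[w])
    return None, None  # all k candidate paths are blocked
```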
NASA Astrophysics Data System (ADS)
Khan, Akhtar Nawaz
2017-11-01
Currently, analytical models are used to compute approximate blocking probabilities in opaque and all-optical WDM networks with homogeneous link capacities. Existing analytical models can also be extended to opaque WDM networking with heterogeneous link capacities, thanks to wavelength conversion at each switch node. However, they cannot be applied to all-optical WDM networking with a heterogeneous structure of link capacities, because of the wavelength continuity constraint and the unequal numbers of wavelength channels on different links. In this work, a mathematical model is extended to compute approximate network blocking probabilities in heterogeneous all-optical WDM networks, in which path blocking is dominated by the link along the path with the fewest wavelength channels. A wavelength assignment scheme for dynamic traffic, termed last-fit-first wavelength assignment, is also proposed, in which the wavelength channel with the maximum index is assigned first to a lightpath request. Owing to the heterogeneous link capacities and the wavelength continuity constraint, the wavelength channels with maximum indexes are utilized for minimum-hop routes, while the channels with minimum indexes are utilized for multi-hop routes between source and destination pairs. The proposed scheme achieves lower blocking probabilities than the existing wavelength assignment heuristic. Finally, numerical results computed for different network scenarios agree closely with the values obtained from simulations.
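The last-fit-first policy itself is only a few lines; the sketch below scans wavelength indices from highest to lowest and takes the first one free on every link of the route. The data structures are illustrative assumptions.

```python
# Minimal sketch of last-fit-first wavelength assignment.

def last_fit_first(route_links, free, num_wavelengths):
    """free[link]: set of idle wavelength indices; links may carry unequal
    channel counts, matching the heterogeneous-capacity setting."""
    for w in reversed(range(num_wavelengths)):
        if all(w in free[link] for link in route_links):
            return w
    return None  # lightpath request is blocked
```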
NASA Astrophysics Data System (ADS)
Joo, Seong-Soon; Nam, Hyun-Soon; Lim, Chang-Kyu
2003-08-01
With the rapid growth of the Optical Internet, high-capacity pipes are finally destined to support end-to-end IP over the WDM optical network. Newly launched 2D MEMS optical switching modules on the market, and the expectation of upcoming transparent optical cross-connects, have encouraged field-applicable research on establishing a real all-optical transparent network. To open up customer-driven bandwidth services, the design of the optical transport network becomes a more challenging task in terms of optimal network resource usage. This paper presents a practical approach to finding a route and wavelength assignment for a wavelength-routed all-optical network that has λ-plane OXC switches and wavelength converters, and in which optical paths are randomly set up and released by dynamic wavelength provisioning to create bandwidth between end users on timescales of seconds or milliseconds. We suggest three constraints that make the RWA problem more practical for deployment in a wavelength-routed all-optical network from the network point of view: a limit on the maximum hops of a route so that optical impairments remain bearable, a limit on the minimum hops traveled before a wavelength is converted, and a limit on the calculation time to find all routes for connections requested at once. We design the NRCD (Normalized Resource and Constraints for All-Optical Network RWA Design) algorithm for the Tera OXC: the network resource for a route is calculated from the number of internal switching paths established in each OXC node on the route, normalized by the ratio of the number of established paths to the number of paths equipped in a node. We show through real experiments on a distributed-objects platform that the algorithm is well suited for RWA in a wavelength-routed all-optical network.
NASA Astrophysics Data System (ADS)
Zhao, Jijun; Zhang, Nawa; Ren, Danping; Hu, Jinhua
2017-12-01
The recently proposed flexible optical network can accommodate multiple data rates more efficiently than current wavelength-routed optical networks. Meanwhile, energy efficiency has become a hot topic because of the serious energy consumption problem. In this paper, the energy efficiency problem of flexible optical networks under physical-layer impairment constraints is studied. We propose a combined impairment-aware and energy-efficient routing and spectrum assignment (RSA) algorithm based on link availability, in which the impact of power consumption minimization on signal quality is considered. With the proposed algorithm, connection requests are established on a subset of the network topology, reducing the number of transitions from the sleep to the active state. The simulation results demonstrate that our proposed algorithm can improve energy efficiency and spectrum resource utilization while maintaining acceptable blocking probability and average delay.
Research on Segmentation Monitoring Control of IA-RWA Algorithm with Probe Flow
NASA Astrophysics Data System (ADS)
Ren, Danping; Guo, Kun; Yao, Qiuyan; Zhao, Jijun
2018-04-01
The impairment-aware routing and wavelength assignment algorithm with probe flow (P-IA-RWA) can accurately estimate the transmission quality of a link when a connection request arrives. However, the probe flow introduced in the P-IA-RWA algorithm can result in competition for wavelength resources. To reduce this competition and the blocking probability of the network, a new P-IA-RWA algorithm with a segmentation monitoring-control mechanism (SMC-P-IA-RWA) is proposed. The algorithm reduces the time that the probe flow holds network resources by segmenting the candidate path suitably for data transmission. The transmission quality of the probe flow sent by the source node is monitored at the endpoint of each segment; the transmission quality of the data can also be monitored, allowing appropriate treatment that avoids unnecessary probe flows. The simulation results show that the proposed SMC-P-IA-RWA algorithm can effectively reduce the blocking probability. It offers a better solution to the competition for resources between the probe flow and the main data to be transferred, and it is more suitable for scheduling control in large-scale networks.
Stefan, Sarah E; Ehsan, Mohammad; Pearson, Wright L; Aksenov, Alexander; Boginski, Vladimir; Bendiak, Brad; Eyler, John R
2011-11-15
Data mining algorithms have been used to analyze the infrared multiple photon dissociation (IRMPD) patterns of gas-phase lithiated disaccharide isomers irradiated with either a line-tunable CO2 laser or a free electron laser (FEL). The IR fragmentation patterns over the wavelength range of 9.2-10.6 μm have been shown in earlier work to correlate uniquely with the asymmetry at the anomeric carbon in each disaccharide. Application of data mining approaches for data analysis allowed unambiguous determination of the anomeric carbon configurations for each disaccharide isomer pair using fragmentation data at a single wavelength. In addition, the linkage positions were easily assigned. This combination of wavelength-selective IRMPD and data mining offers a powerful and convenient tool for differentiation of structurally closely related isomers, including those of gas-phase carbohydrate complexes.
NASA Astrophysics Data System (ADS)
Woradit, Kampol; Guyot, Matthieu; Vanichchanunt, Pisit; Saengudomlert, Poompat; Wuttisittikulkij, Lunchakorn
While the problem of multicast routing and wavelength assignment (MC-RWA) in optical wavelength division multiplexing (WDM) networks has been investigated, relatively few researchers have considered network survivability for multicasting. This paper provides an optimization framework to solve the MC-RWA problem in a multi-fiber WDM network that can recover from a single-link failure with shared protection. Using the light-tree (LT) concept to support multicast sessions, we consider two protection strategies that try to reduce service disruptions after a link failure. The first strategy, called light-tree reconfiguration (LTR) protection, computes a new multicast LT for each session affected by the failure. The second strategy, called optical branch reconfiguration (OBR) protection, tries to restore a logical connection between two adjacent multicast members disconnected by the failure. To solve the MC-RWA problem optimally, we propose an integer linear programming (ILP) formulation that minimizes the total number of fibers required for both working and backup traffic. The ILP formulation takes into account joint routing of working and backup traffic, the wavelength continuity constraint, and the limited splitting degree of multicast-capable optical cross-connects (MC-OXCs). After showing some numerical results for optimal solutions, we propose heuristic algorithms that reduce the computational complexity and make the problem solvable for large networks. Numerical results suggest that the proposed heuristics yield efficient solutions compared to the optimal solutions obtained from exact optimization.
Next-Generation WDM Network Design and Routing
NASA Astrophysics Data System (ADS)
Tsang, Danny H. K.; Bensaou, Brahim
2003-08-01
Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical Burst Switching - Support of Multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule Paper Submission Deadline: November 1, 2003 Notification to Authors: January 15, 2004 Final Manuscripts to Publisher: February 15, 2004 Publication of Focus Issue: February/March 2004
Dynamic multicast routing scheme in WDM optical network
NASA Astrophysics Data System (ADS)
Zhu, Yonghua; Dong, Zhiling; Yao, Hong; Yang, Jianyong; Liu, Yibin
2007-11-01
In the information era, the Internet and World Wide Web services have developed rapidly, so ever-wider bandwidth is required at ever-lower cost, and service demands have diversified: data, images, video, and other special transmission demands present both challenges and opportunities for service providers. Meanwhile, electrical equipment has approached its limits. Optical communication based on wavelength division multiplexing (WDM) and optical cross-connects (OXCs) therefore shows great potential for building optical networks, owing to its unique technical advantages and multi-wavelength characteristics. In this paper, we propose a multi-layered graph model with inter-layer paths to solve the multicast routing and wavelength assignment (RWA) problem by employing an efficient graph-theoretic formulation. At the same time, an efficient dynamic multicast algorithm, the Distributed Message Copying Multicast (DMCM) mechanism, is also proposed. A minimum-hop multicast tree can be constructed dynamically according to this scheme.
NASA Astrophysics Data System (ADS)
Chen, Chunfeng; Liu, Hua; Fan, Ge
2005-02-01
In this paper we consider the problem of designing a network of optical cross-connects (OXCs) to provide end-to-end lightpath services to label switched routers (LSRs). Like some previous work, we take the number of OXCs as our objective; unlike previous studies, we also take into account the fault tolerance of the logical topology. First, using a randomly generated Prüfer number, we generate a tree, and by adding edges to this tree we obtain a physical topology consisting of a number of OXCs and the fiber links connecting them. Notably, we are the first to limit the number of layers of the tree produced by this method. We then design logical topologies on these physical topologies. In principle, we select the shortest path, with additional consideration of link load balancing and the limitations imposed by shared risk link groups (SRLGs). Notably, we run the routing algorithm over the nodes in increasing order of node degree. For the wavelength assignment problem, we adopt a commonly used graph-coloring heuristic. Our problem is clearly computationally intractable, especially when the network is large, so we adopt a tabu search algorithm to find near-optimal solutions. We present numerical results for up to 1000 LSRs and for a wide range of system parameters, such as the number of wavelengths supported by each fiber link and the traffic. The results indicate that it is possible to build large-scale optical networks with rich connectivity in a cost-effective manner, using relatively few but properly dimensioned OXCs.
A novel wavelength availability advertisement based ASON routing protocol implementation
NASA Astrophysics Data System (ADS)
Li, Jian; Liu, Juan; Zhang, Jie; Gu, Wanyi
2005-11-01
A novel wavelength-availability-advertisement-based ASON routing protocol implementation, derived from Open Shortest Path First (OSPF) version 2, is proposed in this paper. It can be applied to an ASON network with a single control domain and can easily be extended to support routing in multi-domain scenarios. Two new types of link state advertisement (LSA) are suggested for disseminating wavelength availability and network topology information. The OSPF mechanisms are inherited to ensure that routing messages are delivered more reliably and converge more quickly with less overhead. Topology auto-discovery is realized through LSA flooding interacting with automatic neighbor discovery using the Link Management Protocol. The new LSA formats are given, and the composition of the link state database (LSD) is described. The new data structures include a topology resource list, an adjacency list, and a route table. We then analyze how ASON differs in link state exchange, the routing information flooding procedure, the flushing procedure, and the participation of new resources, i.e., new links or nodes joining an existing ASON. The effects of link or node failure and recovery, and how to deal with them, are addressed as well. To accommodate different routing and wavelength assignment (RWA) algorithms, a standard and efficient interface is designed. After extensive simulation we give a numerical analysis and reach the following conclusions: the convergence time of wavelength availability information flooding is about 30 milliseconds and is not affected by the RWA algorithm or the call traffic load; the average routing protocol overhead rises linearly with traffic load; the average connection setup time decreases with increasing traffic load because the average routing distance of successfully established lightpaths decreases; wavelength availability advertisement can greatly improve the blocking performance of ASON at relatively low traffic loads; an ASON operator can trade off the advertisement protocol overhead against the blocking probability by adopting and adjusting the routing update triggers; and, finally, wavelength availability advertisement throughout the optical network is practicable, so our ASON routing protocol implementation can be applied in ASON as long as the network is not too large and calls do not arrive and depart too frequently.
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, the FPC proved to be about 50 times faster than the cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
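For orientation, the sketch below shows textbook phase correlation for estimating the shift between a reference and a measured FBG spectrum: the phase-normalized cross-power spectrum is inverted to a sharp correlation peak whose position gives the shift in wavelength bins. The authors' FPC algorithm is a refined, faster variant, so treat this as the generic baseline, with made-up spectra for the demonstration.

```python
# Textbook phase-correlation sketch for the spectral-shift estimate.
import numpy as np

def phase_correlation_shift(reference, measured):
    """Integer shift (in wavelength bins) of `measured` w.r.t. `reference`."""
    R = np.fft.fft(reference)
    M = np.fft.fft(measured)
    cross = np.conj(R) * M
    cross /= np.abs(cross) + 1e-12      # keep phase information only
    corr = np.fft.ifft(cross).real      # sharp peak at the shift
    shift = int(np.argmax(corr))
    if shift > len(reference) // 2:     # unwrap negative shifts
        shift -= len(reference)
    return shift

x = np.linspace(-1.0, 1.0, 512)
reference = np.exp(-(x / 0.05) ** 2)    # idealized FBG reflection peak
measured = np.roll(reference, 7)        # simulated Bragg wavelength shift
print(phase_correlation_shift(reference, measured))  # -> 7
```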
Wavelength routing beyond the standard graph coloring approach
NASA Astrophysics Data System (ADS)
Blankenhorn, Thomas
2004-04-01
When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.
A new algorithm for reliable and general NMR resonance assignment.
Schmidt, Elena; Güntert, Peter
2012-08-01
The new FLYA automated resonance assignment algorithm determines NMR chemical shift assignments on the basis of peak lists from any combination of multidimensional through-bond or through-space NMR experiments for proteins. Backbone and side-chain assignments can be determined. All experimental data are used simultaneously, thereby exploiting optimally the redundancy present in the input peak lists and circumventing potential pitfalls of assignment strategies in which results obtained in a given step remain fixed input data for subsequent steps. Instead of prescribing a specific assignment strategy, the FLYA resonance assignment algorithm requires only experimental peak lists and the primary structure of the protein, from which the peaks expected in a given spectrum can be generated by applying a set of rules, defined in a straightforward way by specifying through-bond or through-space magnetization transfer pathways. The algorithm determines the resonance assignment by finding an optimal mapping between the set of expected peaks that are assigned by definition but have unknown positions and the set of measured peaks in the input peak lists that are initially unassigned but have a known position in the spectrum. Using peak lists obtained by purely automated peak picking from the experimental spectra of three proteins, FLYA assigned correctly 96-99% of the backbone and 90-91% of all resonances that could be assigned manually. Systematic studies quantified the impact of various factors on the assignment accuracy, namely the extent of missing real peaks and the amount of additional artifact peaks in the input peak lists, as well as the accuracy of the peak positions. Comparing the resonance assignments from FLYA with those obtained from two other existing algorithms showed that using identical experimental input data these other algorithms yielded significantly (40-142%) more erroneous assignments than FLYA. The FLYA resonance assignment algorithm thus has the reliability and flexibility to replace most manual and semi-automatic assignment procedures for NMR studies of proteins.
Two-dimensional priority-based dynamic resource allocation algorithm for QoS in WDM/TDM PON networks
NASA Astrophysics Data System (ADS)
Sun, Yixin; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Rao, Lan
2018-01-01
Wavelength division multiplexing/time division multiplexing (WDM/TDM) passive optical networks (PONs) are viewed as a promising solution for delivering multiple services and applications. A hybrid WDM/TDM PON uses a wavelength and bandwidth allocation strategy to control the distribution of wavelength channels in the uplink direction, so that it can meet the high bandwidth requirements of multiple optical network units (ONUs) while improving wavelength resource utilization. Our investigation of existing dynamic bandwidth allocation algorithms shows that they cannot satisfy the requirements of different service levels well while adapting to the structural characteristics of a hybrid WDM/TDM PON system. This paper introduces a novel wavelength and bandwidth allocation algorithm that efficiently utilizes the bandwidth and supports quality of service (QoS) guarantees in WDM/TDM PON. Two priority-based polling subcycles are introduced to increase system efficiency and improve performance: a fixed-priority polling subcycle and a dynamic-priority polling subcycle follow different principles for wavelength and bandwidth allocation according to the priority of each service level. A simulation was conducted to study the performance of priority-based polling in the dynamic resource allocation algorithm for WDM/TDM PON. The results show that the performance of delay-sensitive services is greatly improved without degrading the QoS guarantees of other services. Compared with traditional dynamic bandwidth allocation algorithms, the proposed algorithm can meet the bandwidth needs of different priority traffic classes, achieve low loss rates, and ensure real-time delivery for high-priority traffic under overall network load.
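A compact sketch of such a two-subcycle grant computation follows: a fixed subcycle first serves each ONU's guaranteed share per priority class, then a dynamic subcycle redistributes the leftover capacity in priority order. The data layout, the guaranteed-share rule, and all names are illustrative assumptions rather than the paper's exact algorithm.

```python
# Hedged sketch of a two-subcycle, priority-based grant computation.

def compute_grants(requests, guaranteed, capacity):
    """requests/guaranteed: {onu: {prio: bytes}}, lower prio = higher class;
    assumes the guaranteed shares together fit within `capacity`."""
    grants = {onu: {} for onu in requests}
    # fixed-priority subcycle: guaranteed bandwidth first
    for onu, queues in requests.items():
        for prio, req in queues.items():
            g = min(req, guaranteed[onu].get(prio, 0))
            grants[onu][prio] = g
            capacity -= g
    # dynamic-priority subcycle: leftover capacity, best classes first
    deficits = sorted((prio, onu, req - grants[onu][prio])
                      for onu, queues in requests.items()
                      for prio, req in queues.items())
    for prio, onu, deficit in deficits:
        extra = min(deficit, max(capacity, 0))
        grants[onu][prio] += extra
        capacity -= extra
    return grants
```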
Efficient traffic grooming in SONET/WDM BLSR Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awwal, A S; Billah, A B; Wang, B
2004-04-02
In this paper, we study traffic grooming in SONET/WDM BLSR networks under the uniform all-to-all traffic model, with the objective of reducing total network costs (wavelength and electronic multiplexing costs) and, in particular, minimizing the number of ADMs while using the optimal number of wavelengths. We derive a new, tighter lower bound for the number of wavelengths when the number of nodes is a multiple of 4, and we show that this lower bound is achievable. All previous ADM lower bounds, with perhaps one exception, were derived under the assumption that the magnitude of the traffic streams (r) is one unit (r = 1) with respect to the wavelength capacity granularity g. We derive new, more general and tighter lower bounds for the number of ADMs subject to the optimal number of wavelengths being used, and propose heuristic algorithms (a circle construction algorithm and a circle grooming algorithm) that try to minimize the number of ADMs while using the optimal number of wavelengths in BLSR networks. Both the bounds and the algorithms are applicable to any value of r and to different wavelength granularities g. Performance evaluation shows that, wherever applicable, our lower bounds are at least as good as existing bounds and are much tighter in many cases. Our proposed heuristic grooming algorithms perform very well with traffic streams of larger magnitude; the resulting number of ADMs is very close to the corresponding lower bounds derived in this paper.
System engineering approach to GPM retrieval algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, C. R.; Chandrasekar, V.
2004-01-01
System engineering principles and methods are very useful in large-scale complex systems for developing engineering requirements from end-user needs, but integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system, and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data; these GV sites will consist of radars and DSD measurement systems and have intrinsic constraints of their own. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictates the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength SRT-based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be used successfully without the SRT; it uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting-layer model based on stratified spheres. With N0 and D0 calculated at each bin, the rain rate can then be calculated from a suitable rain-rate model. This paper develops a system engineering interface to the retrieval algorithms while remaining cognizant of system engineering issues, so that it can be used to bridge the divide between algorithm physics and overall mission requirements. Additionally, in line with the systems approach, a methodology is developed such that the measurement requirements pass through the retrieval model and other subsystems and manifest themselves as measurement and other system constraints. A systems model has been developed for the retrieval algorithm that can be evaluated through system-analysis tools such as MATLAB/Simulink.
Study on store-space assignment based on logistic AGV in e-commerce goods to person picking pattern
NASA Astrophysics Data System (ADS)
Xu, Lijuan; Zhu, Jie
2017-10-01
This paper studies store-space assignment based on logistics AGVs in the e-commerce goods-to-person picking pattern. It establishes a store-space assignment model that minimizes picking cost and designs a store-space assignment algorithm following a cluster analysis based on a similarity coefficient. An example analysis then compares the picking cost of the proposed algorithm with allocation by item number and with storage by ABC classification, verifying the effectiveness of the designed store-space assignment algorithm.
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Gan, Chaoqin; Gou, Kaiyu; Xu, Anni; Ma, Jiamin
2018-01-01
A DBA scheme based on a load-balance algorithm (LBA) and a wavelength recycle mechanism (WRM) for multi-wavelength upstream transmission is proposed in this paper. According to their 1 Gbps and 10 Gbps line rates, ONUs are grouped into different VPONs. To facilitate wavelength management, a resource pool is proposed to record wavelength states. To enable quantitative analysis, a mathematical model describing the metro-access network (MAN) environment is presented. For the 10G-EPON upstream, the load-balance algorithm is designed to ensure fair load distribution among 10G-OLTs; for the 1G-EPON upstream, the wavelength recycle mechanism is designed to share the remaining wavelengths. Finally, the effectiveness of the proposed scheme is demonstrated by simulation and analysis.
Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.
Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt
2008-07-01
MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.
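Stripped of the NMR specifics, a memetic algorithm is a population-based (global) search with a local optimizer embedded in each generation. The skeleton below illustrates that combination only; the solution representation, scoring, and neighborhood functions are placeholders that a real assignment problem such as MATCH's spin-system-to-residue mapping would supply.

```python
# Generic memetic-algorithm skeleton (global search + embedded local
# optimization); all problem-specific pieces are placeholder callables.
import random

def memetic_search(score, random_solution, neighbors, pop_size=20, gens=50):
    """`neighbors(sol)` must return a non-empty list of candidate solutions."""
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(gens):
        # local step: one hill-climbing move per individual
        population = [max(neighbors(sol) + [sol], key=score)
                      for sol in population]
        # global step: keep the fitter half, refill with mutated copies
        population.sort(key=score, reverse=True)
        elite = population[: pop_size // 2]
        population = elite + [random.choice(neighbors(sol)) for sol in elite]
    return max(population, key=score)
```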
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
2011-01-01
One bottleneck in NMR structure determination lies in the laborious and time-consuming process of side-chain resonance and NOE assignments. Compared to the well-studied backbone resonance assignment problem, automated side-chain resonance and NOE assignments are relatively less explored. Most NOE assignment algorithms require nearly complete side-chain resonance assignments from a series of through-bond experiments such as HCCH-TOCSY or HCCCONH. Unfortunately, these TOCSY experiments perform poorly on large proteins. To overcome this deficiency, we present a novel algorithm, called NASCA (NOE Assignment and Side-Chain Assignment), to automate both side-chain resonance and NOE assignments and to perform high-resolution protein structure determination in the absence of any explicit through-bond experiment to facilitate side-chain resonance assignment, such as HCCH-TOCSY. After casting the assignment problem into a Markov Random Field (MRF), NASCA extends and applies combinatorial protein design algorithms to compute optimal assignments that best interpret the NMR data. The MRF captures the contact map information of the protein derived from NOESY spectra, exploits the backbone structural information determined by RDCs, and considers all possible side-chain rotamers. The complexity of the combinatorial search is reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is employed to find a set of optimal side-chain resonance assignments that best fit the NMR data. These side-chain resonance assignments are then used to resolve the NOE assignment ambiguity and compute high-resolution protein structures. Tests on five proteins show that NASCA assigns resonances for more than 90% of side-chain protons, and achieves about 80% correct assignments. The final structures computed using the NOE distance restraints assigned by NASCA have backbone RMSD 0.8 – 1.5 Å from the reference structures determined by traditional NMR approaches.
New optimization model for routing and spectrum assignment with nodes insecurity
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-04-01
By adopting orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible, variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two concerns and investigate the routing and spectrum assignment problem with guaranteed security in elastic optical networks, establishing a new optimization model that minimizes the maximum index of the used frequency slots and determines an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed: the heuristic algorithm first sorts the connection requests, and the genetic algorithm then searches for an optimal routing and spectrum assignment scheme, using tailor-made crossover, mutation and local search operators. Moreover, simulation experiments conducted with three heuristic strategies indicate the effectiveness of the proposed model and algorithm framework.
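The objective of minimizing the maximum used slot index pairs naturally with a first-fit spectrum assignment that respects spectrum contiguity and continuity. The sketch below is a minimal illustration of that assignment step under assumed data structures; it is not the paper's hybrid genetic algorithm itself.

```python
# Minimal sketch of first-fit spectrum assignment under the contiguity
# and continuity constraints of an elastic optical network.

def first_fit_spectrum(route_links, free_slots, total_slots, size):
    """free_slots[link]: set of idle frequency-slot indices on that link;
    returns the lowest feasible start index of `size` contiguous slots."""
    for start in range(total_slots - size + 1):
        block = range(start, start + size)
        if all(all(s in free_slots[link] for s in block)
               for link in route_links):
            return start
    return None  # request is blocked on this route
```

Within the genetic algorithm, the fitness of a chromosome (an ordering of the connection requests) could then be evaluated by replaying the requests through such an assignment routine and taking the maximum returned index.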
Method for Accurately Calibrating a Spectrometer Using Broadband Light
NASA Technical Reports Server (NTRS)
Simmons, Stephen; Youngquist, Robert
2011-01-01
A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
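As a rough illustration of how the predicted fringe pattern can refine a factory calibration, the sketch below fits a small linear correction to the pixel-to-wavelength map against the modeled Michelson transmission 0.5*(1 + cos(2*pi*OPD/lambda)). The OPD value and the synthetic 0.6 nm offset are assumptions for the demo, not values from the article.

    import numpy as np
    from scipy.optimize import least_squares

    OPD = 5.0e4                                  # optical path difference, nm (assumed)
    lam_true = np.linspace(400.0, 800.0, 2048)   # true wavelength per pixel, nm
    lam_factory = lam_true + 0.6                 # factory map with a 0.6 nm error
    measured = 0.5 * (1.0 + np.cos(2 * np.pi * OPD / lam_true))

    def residual(p):
        a, b = p                                 # refined map: lam = a*factory + b
        lam = a * lam_factory + b
        model = 0.5 * (1.0 + np.cos(2 * np.pi * OPD / lam))
        return model - measured

    fit = least_squares(residual, x0=[1.0, 0.0])
    print(fit.x)                                 # recovers roughly a = 1, b = -0.6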
PhosSA: Fast and accurate phosphorylation site assignment algorithm for mass spectrometry data.
Saeed, Fahad; Pisitkun, Trairak; Hoffert, Jason D; Rashidian, Sara; Wang, Guanghui; Gucek, Marjan; Knepper, Mark A
2013-11-07
Phosphorylation site assignment of high throughput tandem mass spectrometry (LC-MS/MS) data is one of the most common and critical aspects of phosphoproteomics. Correctly assigning phosphorylated residues helps us understand their biological significance. The design of common search algorithms (such as Sequest, Mascot, etc.) does not incorporate site assignment; therefore, additional algorithms are essential to assign phosphorylation sites for mass spectrometry data. The main contribution of this study is the design and implementation of a linear-time and linear-space dynamic programming strategy for phosphorylation site assignment, referred to as PhosSA. The proposed algorithm uses the summation of peak intensities associated with theoretical spectra as an objective function. Quality control of the assigned sites is achieved using a post-processing redundancy criterion that indicates the signal-to-noise ratio properties of the fragmented spectra. The quality of the algorithm was assessed using data sets experimentally generated from synthetic peptides for which the phosphorylation sites were known. We report that PhosSA achieved a high degree of accuracy and sensitivity with all the experimentally generated mass spectrometry data sets. The implemented algorithm is shown to be extremely fast and scalable with an increasing number of spectra (we report up to 0.5 million spectra/hour on a moderate workstation). The algorithm is designed to accept results from both the Sequest and Mascot search engines. An executable is freely available at http://helixweb.nih.gov/ESBL/PhosSA/ for academic research purposes.
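To make the objective function concrete, here is an illustrative scorer that sums observed peak intensities matched to a candidate phospho-isoform's theoretical fragments. PhosSA optimizes this objective in linear time via dynamic programming; the exhaustive search below is shown only for clarity, and all names (theoretical_mz, spectrum layout) are hypothetical.

    from itertools import combinations

    def score_isoform(sites, theoretical_mz, spectrum, tol=0.5):
        # Sum intensities of observed peaks (m/z, intensity) that match the
        # theoretical fragment m/z values implied by this site combination.
        total = 0.0
        for mz in theoretical_mz(sites):
            total += max((inten for m, inten in spectrum if abs(m - mz) <= tol),
                         default=0.0)
        return total

    def best_assignment(candidate_residues, k, theoretical_mz, spectrum):
        # Exhaustive version of the objective PhosSA optimizes in linear
        # time/space; shown only to make the objective concrete.
        return max(combinations(candidate_residues, k),
                   key=lambda s: score_isoform(s, theoretical_mz, spectrum))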
An Algorithm for Protein Helix Assignment Using Helix Geometry
Cao, Chen; Xu, Shutan; Wang, Lincong
2015-01-01
Helices are one of the most common and were among the earliest recognized secondary structure elements in proteins. The assignment of helices in a protein underlies the analysis of its structure and function. Though the mathematical expression for a helical curve is simple, no previous assignment programs have used a genuine helical curve as a model for helix assignment. In this paper we present a two-step assignment algorithm. The first step searches for a series of bona fide helical curves, each of which best fits the coordinates of four successive backbone Cα atoms. The second step uses the best-fit helical curves as input to make the helix assignment. Application to the protein structures in the PDB (Protein Data Bank) shows that the algorithm is able to assign accurately not only regular α-helices but also 310 and π helices as well as their left-handed versions. One salient feature of the algorithm is that the assigned helices are structurally more uniform than those from previous programs. The structural uniformity should be useful for protein structure classification and prediction, while the accurate assignment of a helix to a particular type underlies the structure-function relationship in proteins. PMID:26132394
NASA Technical Reports Server (NTRS)
Mielke, Roland V. (Inventor); Stoughton, John W. (Inventor)
1990-01-01
Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computational marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
Directly data processing algorithm for multi-wavelength pyrometer (MWP).
Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong
2017-11-27
Data processing for a multi-wavelength pyrometer (MWP) is a difficult problem because the emissivity is unknown. The solutions developed so far generally assume particular mathematical relations for emissivity versus wavelength or emissivity versus temperature. Owing to the deviation between such hypotheses and the actual situation, the inversion results can be seriously affected. A direct data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, the Gradient Projection (GP) algorithm and the Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core idea is that the MWP data processing problem is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. By comparing simulation results for some typical spectral emissivity models, it is found that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that the true-temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is therefore expected to be a direct data processing algorithm that clears the unknown-emissivity obstacle for MWP.
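A rough sketch of the constrained-optimization view described above: jointly fit the true temperature and one emissivity per wavelength to the measured spectral signals while enforcing 0 < eps <= 1. Bound-constrained minimization is used here as a stand-in for the paper's internal penalty function, and the constants, bounds, and relative-calibration assumption are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    C2 = 1.4388e-2   # second radiation constant, m*K

    def planck_shape(lam, T):
        # Spectral shape of blackbody radiance; the first radiation constant
        # is omitted on the assumption that the signals are relatively
        # calibrated (an illustrative simplification).
        return 1.0 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

    def fit_true_temperature(lam, signal, T0=2000.0):
        # Unknowns: true temperature T and one emissivity per wavelength,
        # constrained to 0 < eps <= 1; bounds stand in for the interior
        # penalty used by IPF.
        n = len(lam)
        def cost(x):
            T, eps = x[0], x[1:]
            return np.sum((signal - eps * planck_shape(lam, T)) ** 2)
        x0 = np.concatenate([[T0], np.full(n, 0.9)])
        bounds = [(300.0, 4000.0)] + [(1e-3, 1.0)] * n
        return minimize(cost, x0, bounds=bounds)   # SciPy picks L-BFGS-B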
GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm
NASA Technical Reports Server (NTRS)
Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.
2003-01-01
The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins Bands, the TOMS uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS Algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS Algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid 2000. The 1b detector appears to be quite well behaved throughout this time period.
Systematic wavelength selection for improved multivariate spectral analysis
Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.
1995-01-01
Methods and apparatus for determining in a biological material one or more unknown values of at least one known characteristic (e.g. the concentration of an analyte such as glucose in blood or the concentration of one or more blood gas parameters) with a model based on a set of samples with known values of the known characteristics and a multivariate algorithm using several wavelength subsets. The method includes selecting multiple wavelength subsets, from the electromagnetic spectral region appropriate for determining the known characteristic, for use by an algorithm wherein the selection of wavelength subsets improves the model's fitness of the determination for the unknown values of the known characteristic. The selection process utilizes multivariate search methods that select both predictive and synergistic wavelengths within the range of wavelengths utilized. The fitness of the wavelength subsets is determined by the fitness function F = f(cost, performance). The method includes the steps of: (1) using one or more applications of a genetic algorithm to produce one or more count spectra, with multiple count spectra then combined to produce a combined count spectrum; (2) smoothing the count spectrum; (3) selecting a threshold count from a count spectrum to select those wavelength subsets which optimize the fitness function; and (4) eliminating a portion of the selected wavelength subsets. The determination of the unknown values can be made: (1) noninvasively and in vivo; (2) invasively and in vivo; or (3) in vitro.
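The sketch below shows one plausible reading of a genetic-algorithm wavelength-subset search with a fitness of the form F = f(cost, performance). The bitmask encoding, the least-squares performance term, and the cost weight are all illustrative assumptions, not the patented procedure.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(mask, X, y, cost_per_wavelength=0.01):
        # F = f(cost, performance): reward fit quality (here a plain
        # least-squares residual) and penalize subset size.
        if mask.sum() == 0:
            return -np.inf
        cols = mask.astype(bool)
        coef, res, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        sse = res[0] if res.size else np.sum((X[:, cols] @ coef - y) ** 2)
        return -sse - cost_per_wavelength * mask.sum()

    def ga_select(X, y, pop=40, gens=100, p_mut=0.02):
        n = X.shape[1]
        P = rng.integers(0, 2, size=(pop, n))
        for _ in range(gens):
            scores = np.array([fitness(ind, X, y) for ind in P])
            parents = P[np.argsort(scores)[-pop // 2:]]        # truncation selection
            cut = rng.integers(1, n, size=pop // 2)
            kids = np.array([np.concatenate([parents[i][:c],
                                             parents[(i + 1) % len(parents)][c:]])
                             for i, c in enumerate(cut)])      # 1-point crossover
            kids ^= (rng.random(kids.shape) < p_mut).astype(int)  # bit-flip mutation
            P = np.vstack([parents, kids])
        return P[np.argmax([fitness(ind, X, y) for ind in P])]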
Qualls, Joseph; Russomanno, David J.
2011-01-01
The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081
Highlights of TOMS Version 9 Total Ozone Algorithm
NASA Technical Reports Server (NTRS)
Bhartia, Pawan; Haffner, David
2012-01-01
The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithms have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimum estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements; one can use this information in data assimilation and analysis. A side benefit of this algorithm is that it is considerably simpler than the present algorithm, which uses a database of 1512 profiles to retrieve total ozone. These profiles are tedious to construct and modify. Though conceptually similar to the SBUV V8 algorithm that was developed about a decade ago, the SBUV and TOMS V9 algorithms differ in detail. The TOMS algorithm uses 3 wavelengths to retrieve the profile while the SBUV algorithm uses 6-9 wavelengths, so TOMS provides less profile information. However, both algorithms have comparable total ozone information, and TOMS V9 can be easily adapted to use additional wavelengths from instruments like GOME, OMI and OMPS to provide better profile information at smaller SZAs. The other significant difference between the two algorithms is that while the SBUV algorithm has been optimized for deriving monthly zonal means by making an appropriate choice of the a priori error covariance matrix, the TOMS algorithm has been optimized for tracking short-term variability using month- and latitude-dependent covariance matrices.
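For reference, the core linear step of a Rodgers optimal-estimation retrieval like the one described is compact enough to sketch generically; the variable names are generic and the actual V9 forward model and covariances are not shown.

    import numpy as np

    def optimal_estimate(y, F_xa, K, xa, Sa, Se):
        # y    : measured radiances;  F_xa : forward model at the prior xa
        # K    : Jacobian of the forward model;  Sa, Se : prior and noise covariances
        Sa_inv = np.linalg.inv(Sa)
        Se_inv = np.linalg.inv(Se)
        S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
        x_hat = xa + S_hat @ K.T @ Se_inv @ (y - F_xa)     # retrieved profile
        A = S_hat @ K.T @ Se_inv @ K                       # averaging kernel
        dofs = np.trace(A)    # "pieces of information", quoted above as 1-2.5
        return x_hat, S_hat, dofs

The trace of the averaging kernel is exactly the degrees-of-freedom-of-signal figure the abstract cites, and the posterior covariance supplies the error bars the V9 algorithm was designed to provide.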
NASA Astrophysics Data System (ADS)
Xu, Xiaoqing; Wang, Yawei; Ji, Ying; Xu, Yuanyuan; Xie, Ming
2018-01-01
A new method to extract quantitative phases for each wavelength from three-wavelength in-line phase-shifting interferograms is proposed. First, seven interferograms with positive and negative 2π phase shifts are sequentially captured using the phase-shifting technique. Second, six dc-term-suppressed intensities are obtained through an algebraic algorithm. Finally, the wrapped phases at the three wavelengths are acquired simultaneously from add-subtract combinations of these six intensities, employing the trigonometric function method. By combining this method with a three-wavelength phase unwrapping method, the surface morphology with an increased ambiguity-free range at the synthetic beat wavelength can be obtained while maintaining the low-noise precision of the single-wavelength measurement. We illustrate the principle of this algorithm, and simulated experiments on a spherical cap and a HeLa cell are conducted to verify the proposed method.
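To see why the ambiguity-free range grows, the pairwise synthetic beat wavelength Lambda = lam1*lam2/|lam1 - lam2| can be computed for an illustrative three-wavelength set; the wavelengths below are assumptions, not the paper's.

    # Pairwise synthetic (beat) wavelength for an illustrative wavelength set.
    lam = [632.8e-9, 532.0e-9, 473.0e-9]            # metres

    def beat(l1, l2):
        return l1 * l2 / abs(l1 - l2)               # ambiguity-free range grows

    print(beat(lam[0], lam[1]))                     # ~3.34 um
    print(beat(lam[1], lam[2]))                     # ~4.27 um

Phase unwrapping then proceeds hierarchically: the coarse beat-wavelength phase resolves the integer fringe order of the finer single-wavelength phase.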
Superscattering of light optimized by a genetic algorithm
NASA Astrophysics Data System (ADS)
Mirzaei, Ali; Miroshnichenko, Andrey E.; Shadrivov, Ilya V.; Kivshar, Yuri S.
2014-07-01
We analyse scattering of light from multi-layer plasmonic nanowires and employ a genetic algorithm for optimizing the scattering cross section. We apply the mode-expansion method using experimental data for material parameters to demonstrate that our genetic algorithm allows designing realistic core-shell nanostructures with the superscattering effect achieved at any desired wavelength. This approach can be employed for optimizing both superscattering and cloaking at different wavelengths in the visible spectral range.
A New Secondary Structure Assignment Algorithm Using Cα Backbone Fragments
Cao, Chen; Wang, Guishen; Liu, An; Xu, Shutan; Wang, Lincong; Zou, Shuxue
2016-01-01
The assignment of secondary structure elements in proteins is a key step in the analysis of their structures and functions. We have developed an algorithm, SACF (secondary structure assignment based on Cα fragments), for secondary structure element (SSE) assignment based on the alignment of Cα backbone fragments with central poses derived by clustering known SSE fragments. The assignment algorithm consists of three steps: First, the outlier fragments on known SSEs are detected. Next, the remaining fragments are clustered to obtain the central fragments for each cluster. Finally, the central fragments are used as a template to make assignments. Following a large-scale comparison of 11 secondary structure assignment methods, SACF, KAKSI and PROSS are found to have similar agreement with DSSP, while PCASSO agrees with DSSP best. SACF and PCASSO show a preference for trimming residues in the N- and C-cap regions, whereas KAKSI, P-SEA and SEGNO tend to add residues to the terminals when the DSSP assignment is taken as the standard. Moreover, our algorithm is able to assign subtle helices (310-helix, π-helix and left-handed helix) and make uniform assignments, as well as to detect rare SSEs in β-sheets or long helices as outlier fragments from other programs. The structural uniformity should be useful for protein structure classification and prediction, while outlier fragments underlie the structure-function relationship. PMID:26978354
Single shot multi-wavelength phase retrieval with coherent modulation imaging.
Dong, Xue; Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang
2018-04-15
A single-shot multi-wavelength phase retrieval method is proposed by combining common coherent modulation imaging (CMI) with a low-rank mixed-state algorithm. A radiation beam consisting of multiple wavelengths illuminates the sample under observation, and the exiting field is incident on a random phase plate to form speckle patterns, which are the incoherent superposition of the diffraction patterns of each wavelength. The exiting complex amplitude of the sample, including both the modulus and phase at each wavelength, can be reconstructed simultaneously from the recorded diffraction intensity using the low-rank mixed-state algorithm. The feasibility of the proposed method was verified experimentally with visible light. This method not only makes CMI realizable with partially coherent illumination but also extends its application to various traditionally unrelated fields where several wavelengths must be considered simultaneously.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected by a method that makes the measurement signals sensitive to wavelength and effectively reduces the ill-conditioning of the coefficient matrix of the linear system, enhancing the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the experimentally measured ASD over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
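A minimal runnable sketch of the non-parametric retrieval step using SciPy's LSQR solver; the forward matrix here is a random stand-in for the ADA/Lambert-Beer kernel, and the damping value is an illustrative regularization choice, not the paper's.

    import numpy as np
    from scipy.sparse.linalg import lsqr

    # A maps a discretized size distribution f to spectral extinction
    # measurements b (ADA + Lambert-Beer in the paper; random stand-in here).
    rng = np.random.default_rng(1)
    A = rng.random((8, 30))          # 8 selected wavelengths, 30 size bins
    f_true = np.exp(-0.5 * ((np.arange(30) - 12) / 4.0) ** 2)   # smooth ASD
    b = A @ f_true + 1e-3 * rng.standard_normal(8)              # noisy signals

    # damp > 0 adds Tikhonov-style regularization, improving noise tolerance.
    f_est = lsqr(A, b, damp=1e-2)[0]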
Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hongbin
2016-04-15
The potential use of feature wavelengths for predicting drip loss in grass carp fish, as affected by being frozen at -20°C for 24 h and thawed at 4°C for 1, 2, 4, and 6 days, was investigated. Hyperspectral images of frozen-thawed fish were obtained and their corresponding spectra were extracted. Least-squares support vector machine and multiple linear regression (MLR) models were established using five key wavelengths, selected by combining a genetic algorithm and successive projections algorithm, and showed satisfactory performance in drip loss prediction. The MLR model, with a coefficient of determination of prediction (R2P) of 0.9258 and a root mean square error of prediction (RMSEP) of 1.12%, was applied to transform each pixel of the image and generate the distribution maps of exudation changes. The results confirmed that it is feasible to identify the feature wavelengths using variable selection methods and chemometric analysis for developing on-line multispectral imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Markov Random Field Framework for Protein Side-Chain Resonance Assignment
NASA Astrophysics Data System (ADS)
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
Nuclear magnetic resonance (NMR) spectroscopy plays a critical role in structural genomics, and serves as a primary tool for determining protein structures, dynamics and interactions in physiologically-relevant solution conditions. The current speed of protein structure determination via NMR is limited by the lengthy time required in resonance assignment, which maps spectral peaks to specific atoms and residues in the primary sequence. Although numerous algorithms have been developed to address the backbone resonance assignment problem [68,2,10,37,14,64,1,31,60], little work has been done to automate side-chain resonance assignment [43, 48, 5]. Most previous attempts in assigning side-chain resonances depend on a set of NMR experiments that record through-bond interactions with side-chain protons for each residue. Unfortunately, these NMR experiments have low sensitivity and limited performance on large proteins, which makes it difficult to obtain enough side-chain resonance assignments. On the other hand, it is essential to obtain almost all of the side-chain resonance assignments as a prerequisite for high-resolution structure determination. To overcome this deficiency, we present a novel side-chain resonance assignment algorithm based on alternative NMR experiments measuring through-space interactions between protons in the protein, which also provide crucial distance restraints and are normally required in high-resolution structure determination. We cast the side-chain resonance assignment problem into a Markov Random Field (MRF) framework, and extend and apply combinatorial protein design algorithms to compute the optimal solution that best interprets the NMR data. Our MRF framework captures the contact map information of the protein derived from NMR spectra, and exploits the structural information available from the backbone conformations determined by orientational restraints and a set of discretized side-chain conformations (i.e., rotamers). A Hausdorff-based computation is employed in the scoring function to evaluate the probability of side-chain resonance assignments to generate the observed NMR spectra. The complexity of the assignment problem is first reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is used to find a set of optimal side-chain resonance assignments that best fit the NMR data. We have tested our algorithm on NMR data for five proteins, including the FF Domain 2 of human transcription elongation factor CA150 (FF2), the B1 domain of Protein G (GB1), human ubiquitin, the ubiquitin-binding zinc finger domain of the human Y-family DNA polymerase Eta (pol η UBZ), and the human Set2-Rpb1 interacting domain (hSRI). Our algorithm assigns resonances for more than 90% of the protons in the proteins, and achieves about 80% correct side-chain resonance assignments. The final structures computed using distance restraints resulting from the set of assigned side-chain resonances have backbone RMSD 0.5 - 1.4 Å and all-heavy-atom RMSD 1.0 - 2.2 Å from the reference structures that were determined by X-ray crystallography or traditional NMR approaches. These results demonstrate that our algorithm can be successfully applied to automate side-chain resonance assignment and high-quality protein structure determination. 
Since our algorithm does not require any specific NMR experiments for measuring the through-bond interactions with side-chain protons, it can save a significant amount of both experimental cost and spectrometer time, and hence accelerate the NMR structure determination process.
Yang, Yu; Fritzsching, Keith J; Hong, Mei
2013-11-01
A multi-objective genetic algorithm is introduced to predict the assignment of protein solid-state NMR (SSNMR) spectra with partial resonance overlap and missing peaks due to broad linewidths, molecular motion, and low sensitivity. This non-dominated sorting genetic algorithm II (NSGA-II) aims to identify all possible assignments that are consistent with the spectra and to compare the relative merit of these assignments. Our approach is modeled after the recently introduced Monte-Carlo simulated-annealing (MC/SA) protocol, with the key difference that NSGA-II simultaneously optimizes multiple assignment objectives instead of searching for possible assignments based on a single composite score. The multiple objectives include maximizing the number of consistently assigned peaks between multiple spectra ("good connections"), maximizing the number of used peaks, minimizing the number of inconsistently assigned peaks between spectra ("bad connections"), and minimizing the number of assigned peaks that have no matching peaks in the other spectra ("edges"). Using six SSNMR protein chemical shift datasets with varying levels of imperfection that was introduced by peak deletion, random chemical shift changes, and manual peak picking of spectra with moderately broad linewidths, we show that the NSGA-II algorithm produces a large number of valid and good assignments rapidly. For high-quality chemical shift peak lists, NSGA-II and MC/SA perform similarly well. However, when the peak lists contain many missing peaks that are uncorrelated between different spectra and have chemical shift deviations between spectra, the modified NSGA-II produces a larger number of valid solutions than MC/SA, and is more effective at distinguishing good from mediocre assignments by avoiding the hazard of suboptimal weighting factors for the various objectives. These two advantages, namely diversity and better evaluation, lead to a higher probability of predicting the correct assignment for a larger number of residues. On the other hand, when there are multiple equally good assignments that are significantly different from each other, the modified NSGA-II is less efficient than MC/SA in finding all the solutions. This problem is solved by a combined NSGA-II/MC algorithm, which appears to have the advantages of both NSGA-II and MC/SA. This combination algorithm is robust for the three most difficult chemical shift datasets examined here and is expected to give the highest-quality de novo assignment of challenging protein NMR spectra.
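The core of any NSGA-II-style method is Pareto dominance over the multiple objectives listed above; a minimal sketch follows, where each candidate assignment is scored by an illustrative tuple such as (good connections, used peaks, -bad connections, -edges), all to be maximized.

    def dominates(a, b):
        # a dominates b if it is no worse in every objective and strictly
        # better in at least one (all objectives maximized here).
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def non_dominated_front(scores):
        # Indices of the first Pareto front among candidate assignments.
        return [i for i, a in enumerate(scores)
                if not any(dominates(b, a)
                           for j, b in enumerate(scores) if j != i)]

NSGA-II repeatedly peels off such fronts and fills the next generation front by front, which is what lets it rank assignments without the composite weighting factors that a single-score search requires.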
Lee, Jie Hyun; Park, Heuk; Kang, Sae-Kyoung; Lee, Joon Ki; Chung, Hwan Seok
2015-11-30
In this study, we propose and experimentally demonstrate a wavelength domain rogue-free ONU based on wavelength-pairing of downstream and upstream signals for time/wavelength division-multiplexed optical access networks. The wavelength-pairing tunable filter is aligned to the upstream wavelength channel by aligning it to one of the downstream wavelength channels. Wavelength-pairing is implemented with a compact and cyclic Si-AWG integrated with a Ge-PD. The pairing filter covered four 100 GHz-spaced wavelength channels. The feasibility of the wavelength domain rogue-free operation is investigated by emulating malfunction of the misaligned laser. The wavelength-pairing tunable filter based on the Si-AWG blocks the upstream signal in the non-assigned wavelength channel before data collision with other ONUs.
Evaluation of Dynamic Channel and Power Assignment for Cognitive Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syed A. Ahmad; Umesh Shukla; Ryan E. Irwin
2011-03-01
In this paper, we develop a unifying optimization formulation to describe the Dynamic Channel and Power Assignment (DCPA) problem and an evaluation method for comparing DCPA algorithms. DCPA refers to the allocation of transmit power and frequency channels to links in a cognitive network so as to maximize the total number of feasible links while minimizing the aggregate transmit power. We apply our evaluation method to five algorithms representative of DCPA used in the literature. This comparison illustrates the tradeoffs between control modes (centralized versus distributed) and channel/power assignment techniques. We estimate the complexity of each algorithm. Through simulations, we evaluate the effectiveness of the algorithms in achieving feasible link allocations in the network, as well as their power efficiency. Our results indicate that, when few channels are available, the effectiveness of all algorithms is comparable and thus the one with the smallest complexity should be selected. The Least Interfering Channel and Iterative Power Assignment (LICIPA) algorithm does not require cross-link gain information, has the overall lowest run time, and the highest feasibility ratio of all the distributed algorithms; however, this comes at a cost of higher average power per link.
NASA Astrophysics Data System (ADS)
Dolan, B.; Rutledge, S. A.; Barnum, J. I.; Matsui, T.; Tao, W. K.; Iguchi, T.
2017-12-01
POLarimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a framework that has been developed to simulate radar observations from cloud resolving model (CRM) output and subject model data and observations to the same retrievals, analysis and visualization. This framework not only enables validation of bulk microphysical model simulated properties, but also offers an opportunity to study the uncertainties associated with retrievals such as hydrometeor classification (HID). For the CSU HID, membership beta functions (MBFs) are built using a set of simulations with realistic microphysical assumptions about axis ratio, density, canting angles, and size distributions for each of ten hydrometeor species. These assumptions are tested using POLARRIS to understand their influence on the resulting simulated polarimetric data and final HID classification. Several of these parameters (density, size distributions) are set by the model microphysics, and therefore the specific assumptions of axis ratio and canting angle are carefully studied. Through these sensitivity studies, we hope to be able to provide uncertainties in retrieved polarimetric variables and HID as applied to CRM output. HID retrievals assign a classification to each point by determining the highest score, thereby identifying the dominant hydrometeor type within a volume. However, in nature, there is rarely just a single hydrometeor type at a particular point. Models allow for mixing ratios of different hydrometeors within a grid point. We use the mixing ratios from CRM output in concert with the HID scores and classifications to understand how the HID algorithm can provide information about mixtures within a volume, as well as calculate a confidence in the classifications. We leverage the POLARRIS framework to additionally probe radar wavelength differences toward the possibility of a multi-wavelength HID which could utilize the strengths of different wavelengths to improve HID classifications. With these uncertainties and algorithm improvements, cases of convection are studied in a continental (Oklahoma) and maritime (Darwin, Australia) regime. Observations from C-band polarimetric data in both locations are compared to CRM simulations from NU-WRF using the POLARRIS framework.
Dynamic traffic assignment : genetic algorithms approach
DOT National Transportation Integrated Search
1997-01-01
Real-time route guidance is a promising approach to alleviating congestion on the nation's highways. A dynamic traffic assignment model is central to the development of guidance strategies. The artificial intelligence technique of genetic algorithm...
NASA Astrophysics Data System (ADS)
Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun
2018-07-01
Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, which assigns one resource to each arc. However, a disaster may cause correlated failures of the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
Evaluation of Automatically Assigned Job-Specific Interview Modules
Friesen, Melissa C.; Lan, Qing; Ge, Calvin; Locke, Sarah J.; Hosgood, Dean; Fritschi, Lin; Sadkowsky, Troy; Chen, Yu-Cheng; Wei, Hu; Xu, Jun; Lam, Tai Hing; Kwong, Yok Lam; Chen, Kexin; Xu, Caigang; Su, Yu-Chieh; Chiu, Brian C. H.; Ip, Kai Ming Dennis; Purdue, Mark P.; Bassig, Bryan A.; Rothman, Nat; Vermeulen, Roel
2016-01-01
Objective: In community-based epidemiological studies, job- and industry-specific ‘modules’ are often used to systematically obtain details about the subject’s work tasks. The module assignment is often made by the interviewer, who may have insufficient occupational hygiene knowledge to assign the correct module. We evaluated, in the context of a case–control study of lymphoid neoplasms in Asia (‘AsiaLymph’), the performance of an algorithm that provided automatic, real-time module assignment during a computer-assisted personal interview. Methods: AsiaLymph’s occupational component began with a lifetime occupational history questionnaire with free-text responses and three solvent exposure screening questions. To assign each job to one of 23 study-specific modules, an algorithm automatically searched the free-text responses to the questions ‘job title’ and ‘product made or services provided by employer’ using a list of module-specific keywords, comprising over 5800 keywords in English, Traditional and Simplified Chinese. Hierarchical decision rules were used when the keyword match triggered multiple modules. If no keyword match was identified, a generic solvent module was assigned if the subject responded ‘yes’ to any of the three solvent screening questions. If these question responses were all ‘no’, a work location module was assigned, which redirected the subject to the farming, teaching, health professional, solvent, or industry solvent modules or ended the questions for that job, depending on the location response. We conducted a reliability assessment that compared the algorithm-assigned modules to consensus module assignments made by two industrial hygienists for a subset of 1251 (of 11409) jobs selected using a stratified random selection procedure using module-specific strata. Discordant assignments between the algorithm and consensus assignments (483 jobs) were qualitatively reviewed by the hygienists to evaluate the potential information lost from missed questions with using the algorithm-assigned module (none, low, medium, high). Results: The most frequently assigned modules were the work location (33%), solvent (20%), farming and food industry (19%), and dry cleaning and textile industry (6.4%) modules. In the reliability subset, the algorithm assignment had an exact match to the expert consensus-assigned module for 722 (57.7%) of the 1251 jobs. Overall, adjusted for the proportion of jobs in each stratum, we estimated that 86% of the algorithm-assigned modules would result in no information loss, 2% would have low information loss, and 12% would have medium to high information loss. Medium to high information loss occurred for <10% of the jobs assigned the generic solvent module and for 21, 32, and 31% of the jobs assigned the work location module with location responses of ‘someplace else’, ‘factory’, and ‘don’t know’, respectively. Other work location responses had ≤8% with medium to high information loss because of redirections to other modules. Medium to high information loss occurred more frequently when a job description matched with multiple keywords pointing to different modules (29–69%, depending on the triggered assignment rule). Conclusions: These evaluations demonstrated that automatically assigned modules can reliably reproduce an expert’s module assignment without the direct involvement of an industrial hygienist or interviewer. The feasibility of adapting this framework to other studies will be language- and exposure-specific. PMID:27250109
Multigranular integrated services optical network
NASA Astrophysics Data System (ADS)
Yu, Oliver; Yin, Leping; Xu, Huan; Liao, Ming
2006-12-01
Based on all-optical switches without requiring fiber delay lines and optical-electrical-optical interfaces, the multigranular optical switching (MGOS) network integrates three transport services via unified core control to efficiently support bursty and stream traffic of subwavelength to multiwavelength bandwidth. Adaptive robust optical burst switching (AR-OBS) aggregates subwavelength burst traffic into asynchronous light-rate bursts, transported via slotted-time light paths established by fast two-way reservation with robust blocking recovery control. Multiwavelength optical switching (MW-OS) decomposes multiwavelength stream traffic into a group of timing-related light-rate streams, transported via a light-path group to meet end-to-end delay-variation requirements. Optical circuit switching (OCS) simply converts wavelength stream traffic from an electrical-rate into a light-rate stream. The MGOS network employs decoupled routing, wavelength, and time-slot assignment (RWTA) and novel group routing and wavelength assignment (GRWA) to select slotted-time light paths and light-path groups, respectively. The selected resources are reserved by the unified multigranular robust fast optical reservation protocol (MG-RFORP). Simulation results show that elastic traffic is efficiently supported via AR-OBS in terms of loss rate and wavelength utilization, while connection-oriented wavelength traffic is efficiently supported via wavelength-routed OCS in terms of connection blocking and wavelength utilization. The GRWA-tuning result for MW-OS is also shown.
NASA Astrophysics Data System (ADS)
Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei
2018-01-01
The feasibility of developing a multispectral imaging method, using important wavelengths selected from hyperspectral images by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods, for modeling and predicting protein content in peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, a root mean square error of prediction (RMSEP) of 0.108, and a residual predictive deviation (RPD) of 2.32. Based on the best model and image processing algorithms, the distribution maps of protein content were generated. The overall results indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible and promising for determination of the protein content in peanut kernels.
Use of Dual-wavelength Radar for Snow Parameter Estimates
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew
2005-01-01
Use of dual-wavelength radar, with properly chosen wavelengths, will significantly lessen the ambiguities in the retrieval of microphysical properties of hydrometeors. In this paper, a dual-wavelength algorithm is described to estimate the characteristic parameters of the snow size distributions. An analysis of the computational results, made at X and Ka bands (T-39 airborne radar) and at S and X bands (CP-2 ground-based radar), indicates that valid estimates of the median volume diameter of snow particles, D0, should be possible if one of the two wavelengths of the radar operates in the non-Rayleigh scattering region. However, the accuracy may be affected to some extent if the shape factors of the Gamma function used for describing the particle distribution are chosen far from the true values or if cloud water attenuation is significant. To examine the validity and accuracy of the dual-wavelength radar algorithms, the algorithms are applied to the data taken from the Convective and Precipitation-Electrification Experiment (CaPE) in 1991, in which the dual-wavelength airborne radar was coordinated with in situ aircraft particle observations and ground-based radar measurements. Having carefully co-registered the data obtained from the different platforms, the airborne radar-derived size distributions are then compared with the in situ measurements and ground-based radar. Good agreement is found for these comparisons despite the uncertainties resulting from mismatches of the sample volumes among the different sensors as well as spatial and temporal offsets.
Sensitivity of blackbody effective emissivity to wavelength and temperature: By genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ejigu, E. K.; Liedberg, H. G.
A variable-temperature blackbody (VTBB) is used to calibrate an infrared radiation thermometer (pyrometer). The effective emissivity (ε_eff) of a VTBB depends on temperature and wavelength in addition to the geometry of the VTBB. In the calibration process the effective emissivity is often assumed to be constant within the wavelength and temperature range. There are practical situations where the sensitivity of the effective emissivity needs to be known and a correction has to be applied. We present a method using a genetic algorithm to investigate the sensitivity of the effective emissivity to wavelength and temperature variation. Two matlab® programs are generated: the first to model the radiance temperature calculation and the second to connect the model to the genetic algorithm optimization toolbox. The effective emissivity parameter is taken as a chromosome and optimized at each wavelength and temperature point. The difference between the contact temperature (reading from a platinum resistance thermometer or liquid-in-glass thermometer) and the radiance temperature (calculated from the ε_eff values) is used as an objective function, from which merit values are calculated and best-fit ε_eff values selected. The best-fit ε_eff values obtained as a solution show how sensitive they are to temperature and wavelength parameter variation. Uncertainty components that arise from wavelength and temperature variation are determined based on the sensitivity analysis. Numerical examples are considered for illustration.
Application of a fast sorting algorithm to the assignment of mass spectrometric cross-linking data.
Petrotchenko, Evgeniy V; Borchers, Christoph H
2014-09-01
Cross-linking combined with MS involves enzymatic digestion of cross-linked proteins and identifying cross-linked peptides. Assignment of cross-linked peptide masses requires a search of all possible binary combinations of peptides from the cross-linked proteins' sequences, which becomes impractical with increasing complexity of the protein system and/or if digestion enzyme specificity is relaxed. Here, we describe the application of a fast sorting algorithm to search large sequence databases for cross-linked peptide assignments based on mass. This same algorithm has been used previously for assigning disulfide-bridged peptides (Choi et al., ), but has not previously been applied to cross-linking studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
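A sketch of the sorting-based idea: after sorting the peptide masses once, candidate pairs matching an observed cross-linked precursor mass can be found with a linear two-pointer sweep instead of testing all binary combinations. The names and the tolerance handling are simplified; near-degenerate masses may need a short local scan around each hit.

    def find_crosslink_pairs(peptides, target, linker_mass, tol=0.01):
        # peptides : iterable of (mass, id) tuples; sorted once up front.
        # target   : observed cross-linked precursor mass.
        # Two-pointer sweep makes each query O(n) after the O(n log n) sort.
        peps = sorted(peptides)
        want = target - linker_mass
        lo, hi, hits = 0, len(peps) - 1, []
        while lo <= hi:
            s = peps[lo][0] + peps[hi][0]
            if abs(s - want) <= tol:
                hits.append((peps[lo][1], peps[hi][1]))
                hi -= 1          # keep scanning for further matches
            elif s < want:
                lo += 1
            else:
                hi -= 1
        return hits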
AUTOBA: automation of backbone assignment from HN(C)N suite of experiments.
Borkar, Aditi; Kumar, Dinesh; Hosur, Ramakrishna V
2011-07-01
Development of efficient strategies and automation represent important milestones of progress in rapid structure determination efforts in proteomics research. In this context, we present an efficient algorithm named AUTOBA (Automatic Backbone Assignment), designed to automate the assignment protocol based on the HN(C)N suite of experiments. Depending upon the spectral dispersion, the user can record 2D or 3D versions of the experiments for assignment. The algorithm uses as inputs: (i) the protein primary sequence and (ii) peak lists from the user-defined HN(C)N suite of experiments. In the end, one obtains HN, 15N, Cα and C′ assignments (in common BMRB format) for the individual residues along the polypeptide chain. The success of the algorithm has been demonstrated not only with experimental spectra recorded on two small globular proteins, ubiquitin (76 aa) and M-crystallin (85 aa), but also with simulated spectra of 27 other proteins using assignment data from the BMRB.
Analysis of labor employment assessment on production machine to minimize time production
NASA Astrophysics Data System (ADS)
Hernawati, Tri; Suliawati; Sari Gumay, Vita
2018-03-01
Every company, in services as well as in manufacturing, tries to improve the efficiency of its resource use. One resource with an important role is labor, and workers have different efficiency levels for different jobs. Problems involving the optimal allocation of labor with different levels of efficiency to different jobs are called assignment problems, a special case of linear programming. In this research, the analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the assignment of labor to production machines that minimizes production time. The results showed that the existing assignment is not suitable, because its completion time is longer than that of the assignment obtained with the Hungarian algorithm. Applying the Hungarian algorithm yielded time savings of 16%.
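For illustration, the Hungarian method used in the study is available directly in SciPy; the cost matrix below is a made-up example, not PT PDM's data.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Completion time (hours) of each worker (rows) on each machine (columns).
    cost = np.array([[4.0, 2.5, 3.0],
                     [3.5, 4.5, 2.0],
                     [5.0, 3.0, 4.0]])

    rows, cols = linear_sum_assignment(cost)   # Hungarian-method solution
    print(list(zip(rows, cols)), cost[rows, cols].sum())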
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem that arises nowadays. The challenge of solving it grows whenever the complexity of preferences, the existence of real-world constraints, and the problem size increase. This study focuses on solving a chambering student-case assignment problem, which is classified under the project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because such an approach can return a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In this setting, law graduates must undergo chambering before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. A minimum-cost greedy heuristic is employed to construct a feasible initial solution, and the search then proceeds with a simulated annealing algorithm for further improvement of the solution quality. Analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
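A compact sketch of the scheme described: a greedy initial solution improved by simulated annealing with the standard exponential acceptance rule. The neighborhood move and cost function are left abstract; in this setting they would encode swaps of student-case pairs and total completion time.

    import math, random

    def simulated_annealing(initial, neighbor, cost, T0=100.0, alpha=0.95,
                            iters=10000):
        # Start from a (e.g. greedy) initial solution; accept worse moves
        # with probability exp(-delta/T) under a geometric cooling schedule.
        x, cx = initial, cost(initial)
        best, cb = x, cx
        T = T0
        for _ in range(iters):
            y = neighbor(x)              # e.g. swap two student-case pairs
            cy = cost(y)
            if cy < cx or random.random() < math.exp(-(cy - cx) / T):
                x, cx = y, cy
                if cx < cb:
                    best, cb = x, cx
            T *= alpha                   # cool down
        return best, cb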
More reliable protein NMR peak assignment via improved 2-interval scheduling.
Chen, Zhi-Zhong; Lin, Guohui; Rizzi, Romeo; Wen, Jianjun; Xu, Dong; Xu, Ying; Jiang, Tao
2005-03-01
Protein NMR peak assignment refers to the process of assigning a group of "spin systems" obtained experimentally to a protein sequence of amino acids. The automation of this process is still an unsolved and challenging problem in NMR protein structure determination. Recently, protein NMR peak assignment has been formulated as an interval scheduling problem (ISP), where a protein sequence P of amino acids is viewed as a discrete time interval I (the amino acids on P one-to-one correspond to the time units of I), each subset S of spin systems that are known to originate from consecutive amino acids of P is viewed as a "job" j(S), the preference of assigning S to a subsequence P' of consecutive amino acids on P is viewed as the profit of executing job j(S) in the subinterval of I corresponding to P', and the goal is to maximize the total profit of executing the jobs (on a single machine) during I. The interval scheduling problem is max SNP-hard in general; but in the real practice of protein NMR peak assignment, each job j(S) usually requires at most 10 consecutive time units, and typically the jobs that require one or two consecutive time units are the most difficult to assign/schedule. In order to solve these most difficult assignments, we present an efficient 13/7-approximation algorithm for the special case of the interval scheduling problem where each job takes one or two consecutive time units. Combining this algorithm with a greedy filtering strategy for handling long jobs (i.e., jobs that need more than two consecutive time units), we obtain a new efficient heuristic for protein NMR peak assignment. Our experimental study shows that the new heuristic produces the best peak assignment in most of the cases, compared with the NMR peak assignment algorithms in the recent literature. The above algorithm is also the first approximation algorithm for a nontrivial case of the well-known interval scheduling problem that breaks the ratio 2 barrier.
Adaptive protection algorithm and system
Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
A Genetic-Based Scheduling Algorithm to Minimize the Makespan of the Grid Applications
NASA Astrophysics Data System (ADS)
Entezari-Maleki, Reza; Movaghar, Ali
Task scheduling algorithms in grid environments strive to maximize the overall throughput of the grid. In order to maximize the throughput of grid environments, the makespan of the grid tasks should be minimized. In this paper, a new task scheduling algorithm is proposed to assign tasks to grid resources with the goal of minimizing the total makespan of the tasks. The algorithm uses the genetic approach to find a suitable assignment within the grid resources. The experimental results obtained from applying the proposed algorithm to schedule independent tasks within grid environments demonstrate its applicability in achieving schedules with comparatively lower makespan than other well-known scheduling algorithms such as Min-min, Max-min, RASA and Sufferage.
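For context, here is a minimal sketch of the Min-min baseline the proposed GA is compared against; eta holds expected task run times, and all names are illustrative.

    def min_min(tasks, resources, eta):
        # Min-min heuristic: repeatedly schedule the (task, resource) pair
        # with the smallest earliest completion time.
        # eta[t][r] : expected run time of task t on resource r
        ready = {r: 0.0 for r in resources}     # next-free time per resource
        schedule, todo = {}, set(tasks)
        while todo:
            t, r, ct = min(((t, r, ready[r] + eta[t][r])
                            for t in todo for r in resources),
                           key=lambda x: x[2])
            schedule[t] = r
            ready[r] = ct
            todo.remove(t)
        return schedule, max(ready.values())    # assignment and makespan

A GA for the same problem typically encodes a task-to-resource mapping as the chromosome and uses the makespan returned above as the fitness to minimize.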
Asteroid detection using a single multi-wavelength CCD scan
NASA Astrophysics Data System (ADS)
Melton, Jonathan
2016-09-01
Asteroid detection is a topic of great interest due to the possibility of diverting potentially dangerous asteroids or mining potentially lucrative ones. Currently, asteroid detection is generally performed by taking multiple images of the same patch of sky separated by 10-15 minutes, then subtracting the images to find movement. However, this is time consuming because of the need to revisit the same area multiple times per night. This paper describes an algorithm that can detect asteroids using a single CCD camera scan, thus cutting down on the time and cost of an asteroid survey. The algorithm is based on the fact that some telescopes scan the sky at multiple wavelengths with a small time separation between the wavelength components. As a result, an object moving with sufficient speed will appear in different places in different wavelength components of the same image. Using image processing techniques, we detect the centroids of points of light in the first component and compare these positions to the centroids in the other components using a nearest neighbor algorithm. The algorithm was used on a test set of 49 images obtained from the Sloan telescope in New Mexico and found 100% of known asteroids with only 3 false positives. This algorithm has the advantage of decreasing the amount of time required to perform an asteroid scan, thus allowing more sky to be scanned in the same amount of time or freeing a telescope for other pursuits.
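A minimal sketch of the matching step, with hypothetical centroid coordinates and displacement thresholds; the actual survey pipeline involves detection, calibration and filtering steps not shown here:

```python
import numpy as np

# Hypothetical centroid lists (pixel coordinates) from two wavelength
# components of the same scan; a sufficiently fast mover is displaced
# between components.
c1 = np.array([[10.0, 12.0], [55.0, 40.0], [90.0, 77.0]])   # first band
c2 = np.array([[10.1, 12.0], [57.5, 42.2], [90.0, 77.1]])   # later band

MAX_STATIC = 0.5   # px: below this, treat the source as stationary
MAX_MATCH = 5.0    # px: beyond this, centroids are not the same object

for p in c1:
    d = np.linalg.norm(c2 - p, axis=1)      # nearest-neighbour search
    j = int(np.argmin(d))
    if MAX_STATIC < d[j] <= MAX_MATCH:
        print(f"moving-object candidate: {p} -> {c2[j]} ({d[j]:.2f} px)")
```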
Integrated consensus-based frameworks for unmanned vehicle routing and targeting assignment
NASA Astrophysics Data System (ADS)
Barnawi, Waleed T.
Unmanned aerial vehicles (UAVs) are increasingly deployed in complex and dynamic environments to perform multiple tasks cooperatively with other UAVs that contribute to overarching mission effectiveness. Studies by the Department of Defense (DoD) indicate future operations may include anti-access/area-denial (A2AD) environments which limit human teleoperator decision-making and control. This research addresses the problem of decentralized vehicle re-routing and task reassignments through consensus-based UAV decision-making. An Integrated Consensus-Based Framework (ICF) is formulated as a solution to the combined single task assignment problem and vehicle routing problem. The multiple assignment and vehicle routing problem is solved with the Integrated Consensus-Based Bundle Framework (ICBF). The frameworks are hierarchically decomposed into two levels. The bottom layer utilizes the renowned Dijkstra's algorithm. The top layer addresses task assignment with two methods. The single assignment approach, the Caravan Auction Algorithm (CarA), extends the Consensus-Based Auction Algorithm (CBAA) to provide awareness of task completion by agents and to adopt abandoned tasks. The multiple assignment approach, the Caravan Auction Bundle Algorithm (CarAB), extends the Consensus-Based Bundle Algorithm (CBBA) by providing awareness of lost resources, prioritizing remaining tasks, and adopting abandoned tasks. Research questions are investigated regarding the novelty and performance of the proposed frameworks, and conclusions are drawn through hypothesis testing, with Monte Carlo simulations providing the supporting evidence. The approach addresses current and future military operations for unmanned aerial vehicles, but the general framework is adaptable to any unmanned vehicle. Civil applications involving missions with limited human observability, such as exploration and fire surveillance, could also benefit from independent UAV task assignment.
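For intuition, here is a toy, centralized simulation of a consensus-auction assignment in the spirit of CBAA (not the dissertation's CarA/CarAB extensions); the positions and the inverse-distance bid are invented:

```python
import math

# Hypothetical UAV and task positions; the bid is a simple inverse-distance
# score, whereas real CBAA bids encode mission reward and consensus rounds.
uavs = {"u1": (0, 0), "u2": (10, 0), "u3": (5, 8)}
tasks = {"t1": (1, 1), "t2": (9, 2), "t3": (5, 9)}

def bid(u, t):
    (ux, uy), (tx, ty) = uavs[u], tasks[t]
    return 1.0 / (1.0 + math.hypot(ux - tx, uy - ty))

assignment, free_uavs, open_tasks = {}, set(uavs), set(tasks)
while open_tasks and free_uavs:
    # Every free UAV bids on every open task; the single best bid wins.
    _, u, t = max((bid(u, t), u, t) for u in free_uavs for t in open_tasks)
    assignment[t] = u
    free_uavs.discard(u)
    open_tasks.discard(t)

print(assignment)   # e.g. {'t3': 'u3', 't1': 'u1', 't2': 'u2'}
```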
Deducing chemical structure from crystallographically determined atomic coordinates
Bruno, Ian J.; Shields, Gregory P.; Taylor, Robin
2011-01-01
An improved algorithm has been developed for assigning chemical structures to incoming entries to the Cambridge Structural Database, using only the information available in the deposited CIF. Steps in the algorithm include detection of bonds, selection of polymer unit, resolution of disorder, and assignment of bond types and formal charges. The chief difficulty is posed by the large number of metallo-organic crystal structures that must be processed, given our aspiration that assigned chemical structures should accurately reflect properties such as the oxidation states of metals and redox-active ligands, metal coordination numbers and hapticities, and the aromaticity or otherwise of metal ligands. Other complications arise from disorder, especially when it is symmetry imposed or modelled with the SQUEEZE algorithm. Each assigned structure is accompanied by an estimate of reliability and, where necessary, diagnostic information indicating probable points of error. Although the algorithm was written to aid building of the Cambridge Structural Database, it has the potential to develop into a general-purpose tool for adding chemical information to newly determined crystal structures. PMID:21775812
Autonomous Guidance Strategy for Spacecraft Formations and Reconfiguration Maneuvers
NASA Astrophysics Data System (ADS)
Wahl, Theodore P.
A guidance strategy for autonomous spacecraft formation reconfiguration maneuvers is presented. The guidance strategy is presented as an algorithm that solves the linked assignment and delivery problems. The assignment problem is the task of assigning the member spacecraft of the formation to their new positions in the desired formation geometry. The guidance algorithm uses an auction process (also called an "auction algorithm"), presented in the dissertation, to solve the assignment problem. The auction uses the estimated maneuver and time of flight costs between the spacecraft and targets to create assignments which minimize a specific "expense" function for the formation. The delivery problem is the task of delivering the spacecraft to their assigned positions, and it is addressed through one of two guidance schemes described in this work. The first is a delivery scheme based on artificial potential function (APF) guidance. APF guidance uses the relative distances between the spacecraft, targets, and any obstacles to design maneuvers based on gradients of potential fields. The second delivery scheme is based on model predictive control (MPC); this method uses a model of the system dynamics to plan a series of maneuvers designed to minimize a unique cost function. The guidance algorithm uses an analytic linearized approximation of the relative orbital dynamics, the Yamanaka-Ankersen state transition matrix, in the auction process and in both delivery methods. The proposed guidance strategy is successful, in simulations, in autonomously assigning the members of the formation to new positions and in delivering the spacecraft to these new positions safely using both delivery methods. This guidance algorithm can serve as the basis for future autonomous guidance strategies for spacecraft formation missions.
A novel dynamic wavelength bandwidth allocation scheme over OFDMA PONs
NASA Astrophysics Data System (ADS)
Yan, Bo; Guo, Wei; Jin, Yaohui; Hu, Weisheng
2011-12-01
With the rapid growth of Internet applications, supporting differentiated services and enlarging system capacity have become new tasks for next-generation access systems. In recent years, research on OFDMA Passive Optical Networks (PONs) has developed rapidly owing to their large capacity and scheduling flexibility. Although much work has been done to solve hardware-layer obstacles for OFDMA PON, scheduling algorithms for OFDMA PON systems are still at a preliminary stage. In order to support QoS on OFDMA PON systems, a novel dynamic wavelength bandwidth allocation (DWBA) algorithm that supports per-stream QoS is proposed in this paper. Through simulation, we show that our bandwidth allocation algorithm performs better in bandwidth utilization and differentiated-service support.
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment
Karimzadehgan, Maryam; Zhai, ChengXiang
2011-01-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
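A minimal integer-programming sketch of this kind of assignment, written with the PuLP modeling library; the scores, quota and coverage requirement are made up, and the paper's full formulation additionally encodes multi-aspect topic coverage:

```python
import pulp

# Hypothetical reviewer-paper matching scores, a per-reviewer quota and a
# per-paper coverage requirement.
reviewers = ["r1", "r2", "r3"]
papers = ["p1", "p2"]
quota, per_paper = 2, 2
score = {("r1", "p1"): 0.9, ("r1", "p2"): 0.2,
         ("r2", "p1"): 0.5, ("r2", "p2"): 0.8,
         ("r3", "p1"): 0.3, ("r3", "p2"): 0.7}

prob = pulp.LpProblem("review_assignment", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", list(score), cat="Binary")
prob += pulp.lpSum(score[k] * x[k] for k in score)            # match quality
for r in reviewers:                                           # review quota
    prob += pulp.lpSum(x[(r, p)] for p in papers) <= quota
for p in papers:                                              # coverage
    prob += pulp.lpSum(x[(r, p)] for r in reviewers) == per_paper

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(sorted(k for k in score if x[k].value() == 1))
```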
The airport gate assignment problem: a survey.
Bouras, Abdelghani; Ghaleb, Mageed A; Suryahatmaja, Umar S; Salem, Ahmed M
2014-01-01
The airport gate assignment problem (AGAP) is one of the most important problems operations managers face daily. Much research has been done to solve this problem and tackle its complexity. The objective of the task is assigning each flight (aircraft) to an available gate while maximizing both convenience to passengers and the operational efficiency of the airport. This objective requires a solution that provides the ability to change and update the gate assignment data on a real-time basis. In this paper, we survey the state of the art of these problems and the various methods used to obtain solutions. Our survey covers both theoretical and real AGAP with descriptions of mathematical formulations and resolution methods such as exact algorithms, heuristic algorithms, and metaheuristic algorithms. We also identify research trends that can inspire researchers about new problems in this area.
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper deals with the multi-sensor management problem for multi-target tracking. Collaboration between several sensors observing the same target means that they are able to fuse their data during the information process. This possibility must therefore be taken into account when computing the optimal sensor-target association at each time step. In order to solve this problem for a real large-scale system, both the information aspect and the control aspect of the problem must be considered. To unify these aspects, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is the filtering algorithm of Grime, which relaxes the usual fully-connected hypothesis. By fully connected, one means that the information in a fully-connected system is totally distributed everywhere at the same moment, which is unacceptable for a real large-scale system. We model the distributed assignment decision with the help of a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information dissymmetry in the system. The assignment algorithm uses local knowledge of this dissymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that it is still possible to manage decentralized assignment control even though the system is not fully connected.
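A toy greedy assignment step in the spirit of the locally driven assignment described above: at each epoch, repeatedly commit the sensor-target pair with the largest estimated information gain (gain values invented):

```python
# Invented per-pair information-gain estimates for one decision epoch.
gain = {("s1", "tgt1"): 0.9, ("s1", "tgt2"): 0.4,
        ("s2", "tgt1"): 0.7, ("s2", "tgt2"): 0.6}

plan, used, covered = {}, set(), set()
for (s, t), g in sorted(gain.items(), key=lambda kv: kv[1], reverse=True):
    if s not in used and t not in covered:   # commit best remaining pair
        plan[s] = t
        used.add(s)
        covered.add(t)

print(plan)   # {'s1': 'tgt1', 's2': 'tgt2'}
```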
NASA Astrophysics Data System (ADS)
Millard, R. C.; Seaver, G.
1990-12-01
A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m⁻³ which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m⁻³ accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
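The salinometer idea, inverting an index model n(S) for salinity by Newton-Raphson, can be sketched as follows; the linear toy model stands in for the paper's 27-term algorithm, which this code does not reproduce:

```python
# A linear toy model n(S) with invented coefficients, at fixed wavelength,
# temperature and pressure; not the paper's 27-term fit.
def n_of_S(S):
    return 1.3330 + 1.95e-4 * S

def dn_dS(S):
    return 1.95e-4

def salinity_from_index(n_meas, S0=35.0, tol=1e-12, max_iter=50):
    S = S0
    for _ in range(max_iter):
        f = n_of_S(S) - n_meas
        if abs(f) < tol:
            break
        S -= f / dn_dS(S)          # Newton-Raphson update
    return S

print(round(salinity_from_index(n_of_S(34.2)), 3))   # recovers 34.2 psu
```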
A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements
NASA Technical Reports Server (NTRS)
Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.
2016-01-01
The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
Broadband Gerchberg-Saxton algorithm for freeform diffractive spectral filter design.
Vorndran, Shelby; Russo, Juan M; Wu, Yuechen; Pelaez, Silvana Ayala; Kostuk, Raymond K
2015-11-30
A multi-wavelength expansion of the Gerchberg-Saxton (GS) algorithm is developed to design and optimize a surface relief Diffractive Optical Element (DOE). The DOE simultaneously diffracts distinct wavelength bands into separate target regions. A description of the algorithm is provided, and parameters that affect filter performance are examined. Performance is based on the spectral power collected within specified regions on a receiver plane. The modified GS algorithm is used to design spectrum splitting optics for CdSe and Si photovoltaic (PV) cells. The DOE has an average optical efficiency of 87.5% over the spectral bands of interest (400-710 nm and 710-1100 nm). Simulated PV conversion efficiency is 37.7%, which is 29.3% higher than the efficiency of the better-performing PV cell without spectrum splitting optics.
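For reference, a basic single-wavelength Gerchberg-Saxton loop looks like the following; the paper's contribution is a multi-wavelength extension for a surface-relief DOE, which this sketch does not include:

```python
import numpy as np

# Iterate between the DOE plane and the far field, enforcing the known
# amplitude in each plane and keeping only the phase. Grid size, target
# pattern and iteration count are arbitrary choices for the demo.
rng = np.random.default_rng(0)
N = 64
source_amp = np.ones((N, N))                   # uniform illumination
target_amp = np.zeros((N, N))
target_amp[16:32, 16:32] = 1.0                 # desired far-field amplitude

phase = rng.uniform(0, 2 * np.pi, (N, N))      # random initial phase guess
for _ in range(100):
    far = np.fft.fft2(source_amp * np.exp(1j * phase))
    far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
    near = np.fft.ifft2(far)
    phase = np.angle(near)                          # impose source amplitude

far_amp = np.abs(np.fft.fft2(source_amp * np.exp(1j * phase)))
corr = np.corrcoef(far_amp.ravel(), target_amp.ravel())[0, 1]
print("far-field amplitude correlation with target:", round(float(corr), 3))
```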
Assignment Of Finite Elements To Parallel Processors
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.
1990-01-01
Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.
Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki
2015-03-19
Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.
Dogan, S; Babic, N; Gurkan, C; Goksu, A; Marjanovic, D; Hadziavdic, V
2016-12-01
Y-chromosomal haplogroups are sets of ancestrally related paternal lineages, traditionally assigned by the use of Y-chromosomal single nucleotide polymorphism (Y-SNP) markers. An increasingly popular and a less labor-intensive alternative approach has been Y-chromosomal haplogroup assignment based on already available Y-STR data using a variety of different algorithms. In the present study, such in silico haplogroup assignments were made based on 23-loci Y-STR data for 100 unrelated male individuals from the Tuzla Canton, Bosnia and Herzegovina (B&H) using the following four different algorithms: Whit Athey's Haplogroup Predictor, Jim Cullen's World Haplogroup & Haplogroup-I Subclade Predictor, Vadim Urasin's YPredictor and the NevGen Y-DNA Haplogroup Predictor. Prior in-house assessment of these four different algorithms using a previously published dataset (n=132) from B&H with both Y-STR (12-loci) and Y-SNP data suggested haplogroup misassignment rates between 0.76% and 3.02%. Subsequent analyses with the Tuzla Canton population sample revealed only a few differences in the individual haplogroup assignments when using different algorithms. Nevertheless, the resultant Y-chromosomal haplogroup distribution by each method was very similar, where the most prevalent haplogroups observed were I, R and E with their sublineages I2a, R1a and E1b1b, respectively, which is also in accordance with the previously published Y-SNP data for the B&H population. In conclusion, results presented herein not only constitute a concordance study on the four most popular haplogroup assignment algorithms, but they also give a deeper insight into the inter-population differentiation in B&H on the basis of Y haplogroups for the first time. Copyright © 2016 Elsevier GmbH. All rights reserved.
Systematic assignment of thermodynamic constraints in metabolic network models
Kümmel, Anne; Panke, Sven; Heinemann, Matthias
2006-01-01
Background The availability of genome sequences for many organisms enabled the reconstruction of several genome-scale metabolic network models. Currently, significant efforts are put into the automated reconstruction of such models. For this, several computational tools have been developed that particularly assist in identifying and compiling the organism-specific lists of metabolic reactions. In contrast, the last step of the model reconstruction process, which is the definition of the thermodynamic constraints in terms of reaction directionalities, still needs to be done manually. No computational method exists that allows for an automated and systematic assignment of reaction directions in genome-scale models. Results We present an algorithm that – based on thermodynamics, network topology and heuristic rules – automatically assigns reaction directions in metabolic models such that the reaction network is thermodynamically feasible with respect to the production of energy equivalents. It first exploits all available experimentally derived Gibbs energies of formation to identify irreversible reactions. As these thermodynamic data are not available for all metabolites, in a next step, further reaction directions are assigned on the basis of network topology considerations and thermodynamics-based heuristic rules. Briefly, the algorithm identifies reaction subsets from the metabolic network that are able to convert low-energy co-substrates into their high-energy counterparts and thus net produce energy. Our algorithm aims at disabling such thermodynamically infeasible cyclic operation of reaction subnetworks by assigning reaction directions based on a set of thermodynamics-derived heuristic rules. We demonstrate our algorithm on a genome-scale metabolic model of E. coli. The introduced systematic direction assignment yielded 130 irreversible reactions (out of 920 total reactions), which corresponds to about 70% of all irreversible reactions that are required to disable thermodynamically infeasible energy production. Conclusion Although not being fully comprehensive, our algorithm for systematic reaction direction assignment could define a significant number of irreversible reactions automatically with low computational effort. We envision that the presented algorithm is a valuable part of a computational framework that assists the automated reconstruction of genome-scale metabolic models. PMID:17123434
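The first, thermodynamics-based step can be caricatured in a few lines: flag a reaction irreversible when the Gibbs energy change computed from formation energies is strongly negative. The formation energies and cutoff below are illustrative, not the paper's values:

```python
# Hypothetical formation energies (kJ/mol) and a heuristic cutoff; the real
# algorithm also uses network topology and further heuristic rules.
dGf = {"glc": -917.2, "atp": -2292.6, "adp": -1424.7,
       "g6p": -1810.0, "h": 0.0}

# Example reaction glc + atp -> g6p + adp + h, as products-minus-substrates.
stoich = {"glc": -1, "atp": -1, "g6p": 1, "adp": 1, "h": 1}

dG_r = sum(coef * dGf[m] for m, coef in stoich.items())
THRESHOLD = -20.0                   # kJ/mol: below this, call it irreversible
verdict = "irreversible (forward)" if dG_r < THRESHOLD else "reversible"
print(f"dG_r = {dG_r:.1f} kJ/mol -> {verdict}")
```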
An enhanced DWBA algorithm in hybrid WDM/TDM EPON networks with heterogeneous propagation delays
NASA Astrophysics Data System (ADS)
Li, Chengjun; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2011-12-01
An enhanced dynamic wavelength and bandwidth allocation (DWBA) algorithm in a hybrid WDM/TDM PON is proposed and experimentally demonstrated. In addition to fairness of bandwidth allocation, this algorithm also considers the varying propagation delays between ONUs and the OLT. MATLAB simulations indicate that the improved algorithm performs better than several existing algorithms.
PRF Ambiguity Determination for Radarsat ScanSAR System
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1998-01-01
PRF ambiguity is a potential problem for a spaceborne SAR operated at high frequencies. For a strip-mode SAR, several approaches exist to solve this problem. This paper, however, addresses PRF ambiguity determination algorithms suitable for a burst-mode SAR system such as the Radarsat ScanSAR. The candidate algorithms include the wavelength diversity algorithm, the range look cross-correlation algorithm, and the multi-PRF algorithm.
NASA Astrophysics Data System (ADS)
Ha, J.; Chung, W.; Shin, S.
2015-12-01
Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models can avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, decomposed frequency components from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A′ line. Numerical results demonstrate that spectrogram inversion can recover long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on recovered long-wavelength velocity models was obtained using multi-frequency components combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
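The decomposition step, computing a spectrogram of a trace and extracting individual frequency components, might look like this, with a synthetic chirp standing in for an observed trace:

```python
import numpy as np
from scipy import signal

# A synthetic chirp stands in for an observed seismic trace; we extract the
# time history of one (roughly 5 Hz) component from its spectrogram.
fs = 500.0
t = np.arange(0, 4.0, 1.0 / fs)
trace = signal.chirp(t, f0=5.0, t1=4.0, f1=60.0)

f, tt, Sxx = signal.spectrogram(trace, fs=fs, nperseg=256, noverlap=192)
k = int(np.argmin(np.abs(f - 5.0)))        # bin nearest the 5 Hz component
print(f"component {f[k]:.2f} Hz has {Sxx[k].size} time samples")
```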
One of My Favorite Assignments: Automated Teller Machine Simulation.
ERIC Educational Resources Information Center
Oberman, Paul S.
2001-01-01
Describes an assignment for an introductory computer science class that requires the student to write a software program that simulates an automated teller machine. Highlights include an algorithm for the assignment; sample file contents; language features used; assignment variations; and discussion points. (LRW)
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
NASA Technical Reports Server (NTRS)
Wang, Zhien; Heymsfield, Gerald M.; Li, Lihua; Heymsfield, Andrew J.
2005-01-01
An algorithm to retrieve optically thick ice cloud microphysical property profiles is developed by using the GSFC 9.6 GHz ER-2 Doppler Radar (EDOP) and the 94 GHz Cloud Radar System (CRS) measurements aboard the high-altitude ER-2 aircraft. In situ size distribution and total water content data from the CRYSTAL-FACE field campaign are used for the algorithm development. To reduce uncertainty in calculated radar reflectivity factors (Ze) at these wavelengths, coincident radar measurements and size distribution data are used to guide the selection of mass-length relationships and to deal with the density and non-spherical effects of ice crystals on the Ze calculations. The algorithm is able to retrieve microphysical property profiles of optically thick ice clouds, such as deep convective and anvil clouds, which are very challenging for single-frequency radar and lidar. Examples of retrieved microphysical properties for a deep convective cloud are presented, which show that EDOP and CRS measurements provide rich information to study cloud structure and evolution. Good agreement between IWPs derived from an independent submillimeter-wave radiometer, CoSSIR, and the dual-wavelength radar measurements indicates the accuracy of the IWC retrieved from the two-frequency radar algorithm.
Remote Sensing of Coastal and Inland Waters
NASA Astrophysics Data System (ADS)
De Keukelaere, L.; Sterckx, S.; Adriaensen, S.; Knaeps, E.
2016-02-01
The new generation of satellites (e.g. Landsat 8, HyspIRI, Sentinel 2 and Sentinel 3, …) carry sensors that enable monitoring at increased spatial and/or spectral resolution. This opens a wide range of new opportunities, among them improved observation of coastal and inland waters. Algorithms for the pre-processing of these images and the derivation of Level 2 products for these waters need to take into account the specific nature of these environments, with adjacency effects from the nearby land and complex interactions of the optically active substances with varying degrees of turbidity. Here a new, sensor-generic atmospheric correction algorithm, OPERA, is presented which can deal with these highly complex environments. OPERA accounts for the contribution of adjacency effects and provides surface reflectances for both land and water targets. OPERA is extended with a Level 2 water algorithm providing TSM and turbidity estimates for a wide variety of water types. The algorithm is based on a multi-wavelength switching approach, using shorter wavelengths in low-turbidity waters and long NIR and SWIR wavelengths for highly and extremely turbid waters. Results are shown for Landsat-8, Sentinel-2 and MERIS for a variety of scenes, validated with field AERONET and turbidity data.
Comparison of PMC measurements from AIM and SBUV/2
NASA Astrophysics Data System (ADS)
Benze, S.; Randall, C.; Deland, M.; Thomas, G.; Rusch, D.; Bailey, S.; Russell, J.; McClintock, W.; Merkel, A.; Jeppesen, C.
2007-12-01
The Aeronomy of Ice in the Mesosphere (AIM) spacecraft, launched on 25 April 2007 from Vandenberg Air Force Base, is a satellite mission that explores Polar Mesospheric Clouds (PMCs) in order to find out why they form and why they are changing. Results of this mission will provide new knowledge about the connection between PMCs and the meteorology of the polar mesosphere. The Cloud Imaging and Particle Size (CIPS) instrument is a nadir-viewing instrument from which PMC frequency and brightness can be inferred. It produces panoramic images of scattered radiation at 265 nm with a field of view of 1800 x 800 km and high spatial resolution. This work provides a first comparison of CIPS PMC morphology with concurrent results from the Solar Backscatter Ultraviolet (SBUV/2) instrument, which has provided a 28-year climatology of PMC brightness and frequency. CIPS and SBUV/2 PMC detections are compared for selected days in the 2007 northern hemisphere season. To facilitate comparison, the CIPS footprint of 1x2 km is binned to match the SBUV/2 footprint of 150x150 km at the PMC altitude of 80 km. Because CIPS measures only one wavelength at 265 nm, the SBUV PMC detection algorithm, which normally uses data at five wavelengths between 252-292 nm, is simplified to an algorithm applying just one wavelength. It will be shown that the single-wavelength SBUV/2 algorithm gives similar results to the original algorithm. PMC frequency and brightness derived from both CIPS and SBUV/2 using the common algorithm will be compared. Cloud brightness for all latitudes agrees to within 1 percent over the season. In addition, a coincidence analysis of CIPS and all three operational SBUV/2 instruments for the summer 2007 season will be shown.
Combining Visible and Infrared Spectral Tests for Dust Identification
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Levy, Robert; Kleidman, Richard; Remer, Lorraine; Mattoo, Shana
2016-01-01
The MODIS Dark Target aerosol algorithm over Ocean (DT-O) uses spectral reflectance in the visible, near-IR and SWIR wavelengths to determine aerosol optical depth (AOD) and Angstrom Exponent (AE). Even though DT-O does have "dust-like" models to choose from, dust is not identified a priori before inversion. The "dust-like" models are not true "dust models" as they are spherical and do not have enough absorption at short wavelengths, so retrieved AOD and AE for dusty regions tend to be biased. The inference of "dust" is based on postprocessing criteria for AOD and AE by users. Dust aerosol has known spectral signatures in the near-UV (deep blue), visible, and thermal infrared (TIR) wavelength regions. Multiple dust detection algorithms have been developed over the years with varying detection capabilities. Here, we test a few of these dust detection algorithms to determine whether they can be useful in informing the choices made by the DT-O algorithm. We evaluate the following methods: the multichannel imager (MCI) algorithm uses spectral threshold tests in the (0.47, 0.64, 0.86, 1.38, 2.26, 3.9, 11.0, 12.0 micrometer) channels and a spatial uniformity test [Zhao et al., 2010]; the NOAA dust aerosol index (DAI) uses spectral contrast in the blue channels (412 nm and 440 nm) [Ciren and Kundragunta, 2014]. The MCI is already included as tests within the "Wisconsin" (MOD35) cloud mask algorithm.
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals, the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
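The following is not the exact linkage algorithm itself, but a standard agglomerative run on 1 minus the posterior co-assignment probability, to illustrate how such probabilities can be turned into a tree; the probability matrix is invented:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Invented posterior co-assignment probabilities for four individuals;
# distance = 1 - probability.
coassign = np.array([[1.0, 0.9, 0.2, 0.1],
                     [0.9, 1.0, 0.3, 0.2],
                     [0.2, 0.3, 1.0, 0.8],
                     [0.1, 0.2, 0.8, 1.0]])
condensed = squareform(1.0 - coassign, checks=False)
tree = linkage(condensed, method="complete")
print(tree)   # rows: merged cluster ids, merge distance, new cluster size
```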
Multicast routing for wavelength-routed WDM networks with dynamic membership
NASA Astrophysics Data System (ADS)
Huang, Nen-Fu; Liu, Te-Lung; Wang, Yao-Tzung; Li, Bo
2000-09-01
Future broadband networks must support integrated services and offer flexible bandwidth usage. In our previous work, we explored an optical link control layer on top of the optical layer that enables bandwidth-on-demand service directly over wavelength division multiplexed (WDM) networks. Today, more and more applications and services, such as video-conferencing software and Virtual LAN services, require multicast support from the underlying networks. Currently, it is difficult to provide wavelength multicast over optical switches without optical/electronic conversions, although such conversion incurs extra cost. In this paper, based on the proposed wavelength router architecture (equipped with ATM switches to offer O/E and E/O conversions when necessary), a dynamic multicast routing algorithm is proposed to furnish multicast services over WDM networks. The goal is to join a new group member to the multicast tree so that the cost, including the link cost and the optical/electronic conversion cost, is kept as low as possible. The effectiveness of the proposed wavelength router architecture as well as the dynamic multicast algorithm is evaluated by simulation.
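The join step can be sketched as a shortest-path graft: run Dijkstra from the new member and stop at the first node already on the tree. The topology and edge weights here are invented and would, in the paper's setting, fold in both link and O/E/O conversion costs:

```python
import heapq

# Invented topology; edge weights could combine link cost with any O/E/O
# conversion penalty at the node.
graph = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 1, "d": 5},
         "c": {"a": 4, "b": 1, "d": 1}, "d": {"b": 5, "c": 1}}
tree_nodes = {"a", "b"}              # nodes already on the light-tree
new_member = "d"

dist, prev = {new_member: 0}, {}
heap = [(0, new_member)]
while heap:
    d, u = heapq.heappop(heap)
    if u in tree_nodes:              # first tree node reached: graft here
        path = [u]
        while path[-1] != new_member:
            path.append(prev[path[-1]])
        print("graft path (tree -> member):", path, "cost:", d)
        break
    for v, w in graph[u].items():
        if d + w < dist.get(v, float("inf")):
            dist[v], prev[v] = d + w, u
            heapq.heappush(heap, (d + w, v))
```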
Tehranchi, Amirhossein; Kashyap, Raman
2009-10-12
A wavelength converter based on counterpropagating quasi-phase matched cascaded sum and difference frequency generation in lossy lithium niobate waveguide is numerically evaluated and compared to a single-pass scheme assuming a large pump wavelength difference of 75 nm. A double-pass device is proposed to improve the conversion efficiency while the response flattening is accomplished by increasing the wavelength tuning of one pump. The criteria for the design of the low-loss waveguide length, and the assignment of power in the pumps to achieve the desired efficiency, ripple and bandwidth are presented.
Time Shared Optical Network (TSON): a novel metro architecture for flexible multi-granular services.
Zervas, Georgios S; Triay, Joan; Amaya, Norberto; Qin, Yixuan; Cervelló-Pastor, Cristina; Simeonidou, Dimitra
2011-12-12
This paper presents the Time Shared Optical Network (TSON) as metro mesh network architecture for guaranteed, statistically-multiplexed services. TSON proposes a flexible and tunable time-wavelength assignment along with one-way tree-based reservation and node architecture. It delivers guaranteed sub-wavelength and multi-granular network services without wavelength conversion, time-slice interchange and optical buffering. Simulation results demonstrate high network utilization, fast service delivery, and low end-to-end delay on a contention-free sub-wavelength optical transport network. In addition, implementation complexity in terms of Layer 2 aggregation, grooming and optical switching has been evaluated. © 2011 Optical Society of America
Spatial Distortion of Vibration Modes via Magnetic Correlation of Impurities
NASA Astrophysics Data System (ADS)
Krasniqi, F. S.; Zhong, Y.; Epp, S. W.; Foucar, L.; Trigo, M.; Chen, J.; Reis, D. A.; Wang, H. L.; Zhao, J. H.; Lemke, H. T.; Zhu, D.; Chollet, M.; Fritz, D. M.; Hartmann, R.; Englert, L.; Strüder, L.; Schlichting, I.; Ullrich, J.
2018-03-01
Long wavelength vibrational modes in the ferromagnetic semiconductor Ga0.91Mn0.09As are investigated using time resolved x-ray diffraction. At room temperature, we measure oscillations in the x-ray diffraction intensity corresponding to coherent vibrational modes with well-defined wavelengths. When the correlation of magnetic impurities sets in, we observe the transition of the lattice into a disordered state that does not support coherent modes at large wavelengths. Our measurements point toward a magnetically induced broadening of long wavelength vibrational modes in momentum space and their quasilocalization in the real space. More specifically, long wavelength vibrational modes cannot be assigned to a single wavelength but rather should be represented as a superposition of plane waves with different wavelengths. Our findings have strong implications for the phonon-related processes, especially carrier-phonon and phonon-phonon scattering, which govern the electrical conductivity and thermal management of semiconductor-based devices.
Accurate Laboratory Wavelengths of the e³Σ⁻(ν′ = 5)–X¹Σ⁺(ν″ = 0) Band of ¹²C¹⁶O
NASA Astrophysics Data System (ADS)
Dickenson, G. D.; Nortje, A. C.; Steenkamp, C. M.; Rohwer, E. G.; Du Plessis, A.
2010-05-01
The forbidden singlet-triplet transitions of carbon monoxide (CO) are important in the interpretation of vacuum ultraviolet interstellar absorption spectra and in particular for the measurement of large CO column densities. Twenty rovibronic lines of the e³Σ⁻(ν′ = 5)–X¹Σ⁺(ν″ = 0) band of ¹²C¹⁶O for which laboratory wavelengths were previously unavailable were identified in laser-induced fluorescence excitation spectra. Wavelengths were assigned to five rovibronic transitions to an average accuracy of 0.0028 Å. A further 15 lines could not be fully resolved and average wavelengths were measured for these groups of closely spaced lines. A wavelength difference of 0.011 ± 0.0028 Å between the measured wavelengths and the calculated wavelengths in the atlas of Eidelsberg & Rostas demonstrates the need for more experimental data on CO.
NASA Astrophysics Data System (ADS)
Cheng, Jinlong; Gao, Zhishan; Bie, Shuyou; Dou, Yimeng; Ni, Ruihu; Yuan, Qun
2018-02-01
Simultaneous dual-wavelength interferometry (SDWI) can extend the measurement range of single-wavelength interferometry. The moiré fringe generated in SDWI represents the measured long synthetic-wavelength (λ_S) phase only indirectly, so phase demodulation is rather arduous. To address this issue, we present a method to convert the moiré fringe pattern into a synthetic-wavelength interferogram (moiré to synthetic-wavelength, MTS). After squaring the moiré fringe pattern in the MTS method, the additive moiré pattern is turned into a multiplicative one, and the synthetic-wavelength interferogram can then be obtained by low-pass filtering the spectrum of the multiplicative moiré fringe pattern. Therefore, when the dual-wavelength interferometer is implemented with a π/2 phase shift at λ_S, a sequence of synthetic-wavelength phase-shift interferograms with π/2 phase shift can be obtained after MTS processing of the captured moiré fringe patterns, and the synthetic-wavelength phase can then be retrieved by a conventional phase-shift algorithm. Compared with other methods in SDWI, the proposed MTS approach relaxes the restrictions on the phase shift and the number of frames through the adoption of the conventional phase-shift algorithm. Numerical simulations are executed to evaluate the performance of the MTS method in processing time, number of interferogram frames, and phase-shift error compensation. The necessary linear carrier for the MTS method is less than 0.11 times that of the traditional dual-wavelength spatial-domain Fourier transform method. Finally, the deviations for the MTS method in experiments are 0.97% for a step with a height of 7.8 μm and 1.11% for a Fresnel lens with a step height of 6.2328 μm.
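A one-dimensional toy of the MTS squaring-plus-low-pass step, with invented carrier frequencies and cutoff:

```python
import numpy as np

# Two close fringe carriers form an additive moiré; squaring creates the
# multiplicative beat term cos((k2-k1)x), which low-pass filtering isolates.
x = np.linspace(0.0, 1.0, 2048, endpoint=False)
k1, k2 = 2 * np.pi * 200, 2 * np.pi * 210
moire = 2 + np.cos(k1 * x) + np.cos(k2 * x)     # additive moiré pattern

spec = np.fft.rfft(moire ** 2)                  # square, then go to spectrum
freq = np.fft.rfftfreq(x.size, d=x[1] - x[0])
spec[freq > 50] = 0                             # keep DC and the 10-cycle beat
synthetic = np.fft.irfft(spec, n=x.size)        # synthetic-wavelength fringe

beat = synthetic - synthetic.mean()
print("dominant fringe frequency:", int(np.argmax(np.abs(np.fft.rfft(beat)))))
```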
Application of a Dynamic Programming Algorithm for Weapon Target Assignment
2016-02-01
Hammond, Lloyd
… optimisation techniques to support the decision making process. This report documents the methodology used to identify, develop and assess a …
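The report itself is only partially recoverable here; as generic background, a common baseline for weapon-target assignment is the greedy maximum-marginal-return heuristic sketched below (target values and kill probabilities invented, and not necessarily the report's dynamic program):

```python
# Invented target values and kill probabilities; greedy maximum marginal
# return is a textbook baseline for weapon-target assignment.
values = {"T1": 10.0, "T2": 6.0}
pkill = {("W1", "T1"): 0.8, ("W1", "T2"): 0.5,
         ("W2", "T1"): 0.6, ("W2", "T2"): 0.7,
         ("W3", "T1"): 0.4, ("W3", "T2"): 0.9}

survive = {t: 1.0 for t in values}       # running survival probabilities
plan = {}
for w in ["W1", "W2", "W3"]:
    # Expected value destroyed if weapon w engages target t right now.
    t = max(values, key=lambda t: values[t] * survive[t] * pkill[(w, t)])
    plan[w] = t
    survive[t] *= 1.0 - pkill[(w, t)]

print(plan, {t: round(values[t] * survive[t], 2) for t in values})
```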
Use of Multiangle Satellite Observations to Retrieve Aerosol Properties and Ocean Color
NASA Technical Reports Server (NTRS)
Martonchik, John V.; Diner, David; Kahn, Ralph
2005-01-01
A new technique is described for retrieving aerosol over ocean water and the associated ocean color using multiangle satellite observations. Unlike current satellite aerosol retrieval algorithms, which only utilize observations at red wavelengths and longer under the assumption that the water-leaving radiance at these wavelengths is negligible, this new algorithm uses all available spectral bands and simultaneously retrieves both aerosol properties and the spectral ocean color. We show some results of case studies using MISR data, performed over different water conditions (coastal water, blooms, and open water).
Key on demand (KoD) for software-defined optical networks secured by quantum key distribution (QKD).
Cao, Yuan; Zhao, Yongli; Colman-Meixner, Carlos; Yu, Xiaosong; Zhang, Jie
2017-10-30
Software-defined optical networking (SDON) is set to become the next-generation optical network architecture. However, the optical layer and control layer of SDON are vulnerable to cyberattacks. While data encryption is an effective method to minimize the negative effects of cyberattacks, secure key exchange is its major challenge, which can be addressed by the quantum key distribution (QKD) technique. Hence, in this paper we discuss the integration of QKD with WDM optical networks to secure the SDON architecture by introducing a novel key-on-demand (KoD) scheme enabled by a novel routing, wavelength and key assignment (RWKA) algorithm. The QKD-over-SDON-with-KoD model follows two steps to provide security: i) quantum key pool (QKP) construction for securing the control channels (CChs) and data channels (DChs); ii) the KoD scheme uses the RWKA algorithm to allocate and update secret keys for different security requirements. To test our model, we define a security probability index which measures the security gain in CChs and DChs. Simulation results indicate that the security performance of CChs and DChs can be enhanced by provisioning sufficient secret keys in QKPs and performing key updating considering potential cyberattacks. Also, KoD is beneficial for achieving a positive balance between security requirements and key resource usage.
McDonnell, Diana D; Graham, Carrie L
2015-03-01
In 2011 California began transitioning approximately 340,000 seniors and people with disabilities from Medicaid fee-for-service (FFS) to Medicaid managed care plans. When beneficiaries did not actively choose a managed care plan, the state assigned them to one using an algorithm based on their previous FFS primary and specialty care use. When no clear link could be established, beneficiaries were assigned by default to a managed care plan based on weighted randomization. In this article we report the results of a telephone survey of 1,521 seniors and people with disabilities enrolled in Medi-Cal (California Medicaid) and who were recently transitioned to a managed care plan. We found that 48 percent chose their own plan, 11 percent were assigned to a plan by algorithm, and 41 percent were assigned to a plan by default. People in the latter two categories reported being similarly less positive about their experiences compared to beneficiaries who actively chose a plan. Many states in addition to California are implementing mandatory transitions of Medicaid-only beneficiaries to managed care plans. Our results highlight the importance of encouraging beneficiaries to actively choose their health plan; when beneficiaries do not choose, states should employ robust intelligent assignment algorithms. Project HOPE—The People-to-People Health Foundation, Inc.
An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming
2017-02-01
In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first perturb the IP core assignment of each TAM to produce a new candidate solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time; core assignments are then accepted according to the simulated annealing acceptance criterion until the optimum solution is attained. We ran the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02), and the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), simulated annealing algorithm (SA) and genetic algorithm (GA). When the TAM width reaches 48, 56 and 64, the testing time based on our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32% and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), which is the current state of the art.
USDA-ARS?s Scientific Manuscript database
Several bio-optical algorithms were developed to estimate the chlorophyll-a (Chl-a) and phycocyanin (PC) concentrations in inland waters. This study aimed at identifying the influence of the algorithm parameters and wavelength bands on output variables and searching optimal parameter values. The opt...
NASA Astrophysics Data System (ADS)
Jarvis, Jan; Haertelt, Marko; Hugger, Stefan; Butschek, Lorenz; Fuchs, Frank; Ostendorf, Ralf; Wagner, Joachim; Beyerer, Juergen
2017-04-01
In this work we present data analysis algorithms for detection of hazardous substances in hyperspectral observations acquired using active mid-infrared (MIR) backscattering spectroscopy. We present a novel background extraction algorithm based on the adaptive target generation process proposed by Ren and Chang called the adaptive background generation process (ABGP) that generates a robust and physically meaningful set of background spectra for operation of the well-known adaptive matched subspace detection (AMSD) algorithm. It is shown that the resulting AMSD-ABGP detection algorithm competes well with other widely used detection algorithms. The method is demonstrated in measurement data obtained by two fundamentally different active MIR hyperspectral data acquisition devices. A hyperspectral image sensor applicable in static scenes takes a wavelength sequential approach to hyperspectral data acquisition, whereas a rapid wavelength-scanning single-element detector variant of the same principle uses spatial scanning to generate the hyperspectral observation. It is shown that the measurement timescale of the latter is sufficient for the application of the data analysis algorithms even in dynamic scenarios.
The Goddard Profiling Algorithm (GPROF): Description and Current Applications
NASA Technical Reports Server (NTRS)
Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea
2004-01-01
Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
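A schematic of such a Bayesian retrieval: estimate the geophysical quantity as a database average of candidate profiles, weighted by how well each candidate's simulated brightness temperatures match the observation. The database entries and channel error below are invented:

```python
import numpy as np

# Invented database of candidate profiles: simulated brightness temperatures
# (two channels) and the rain rate attached to each profile; sigma is an
# assumed channel error.
db_tb = np.array([[220.0, 250.0], [240.0, 262.0], [255.0, 270.0]])  # K
db_rain = np.array([8.0, 3.0, 0.5])                                 # mm/h
obs_tb = np.array([238.0, 260.0])                                   # observed
sigma = 4.0

w = np.exp(-0.5 * np.sum(((db_tb - obs_tb) / sigma) ** 2, axis=1))
estimate = np.sum(w * db_rain) / np.sum(w)    # likelihood-weighted average
print(f"Bayesian rain-rate estimate: {estimate:.2f} mm/h")
```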
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) of a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging
Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S.; Cho, Hyunjeong; Cho, Byoung-Kwan
2015-01-01
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single-waveband algorithm and multi-spectral algorithms, were developed in order to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400–1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557–701 nm, respectively, exhibited better classification performances compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce. PMID:26610510
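The two-band ratio idea reduces to thresholding a ratio image; below is a synthetic stand-in, with a hypothetical decision threshold rather than the study's trained one:

```python
import numpy as np

# Synthetic reflectance planes at the two selected wavebands; the threshold
# is an assumption for illustration.
rng = np.random.default_rng(3)
R552 = rng.uniform(0.2, 0.8, (4, 4))   # reflectance at 552 nm
R701 = rng.uniform(0.2, 0.8, (4, 4))   # reflectance at 701 nm

ratio = R552 / R701                    # ratio image (RI)
discolored = ratio < 0.75              # threshold classifies each pixel
print("discolored-pixel mask:\n", discolored)
```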
Flexible Multi agent Algorithm for Distributed Decision Making
2015-01-01
How, J. P. Consensus-Based Auction Approaches for Decentralized Task Assignment. Proceedings of the AIAA Guidance, Navigation, and Control...G.; Kim, Y. Market-Based Decentralized Task Assignment for Cooperative UAV Mission Including Rendezvous. Proceedings of the AIAA Guidance...scalable and adaptable to a variety of specific mission tasks. Additionally, the algorithm could easily be adapted for use on land or sea-based systems
[Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-09-01
In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with the particle size and refractive index was analyzed. The corresponding wavelengths were selected as the measurement wavelengths at which the second order differential extinction spectrum was discontinuous. Furthermore, the minimum and the maximum wavelengths in the visible region were also selected as measurement wavelengths. The genetic algorithm was used as the inversion method under the dependent model. The computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and use this selection method of the optimal wavelength in total light scattering particle sizing. The rough contour of the particle size distribution can be determined after the analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimal algorithm, and then a more accurate inversion result can be obtained using the selection method. The inversion results for monomodal and bimodal distributions are still satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.
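The wavelength-selection step lends itself to a short sketch: compute the second-order difference of the extinction spectrum, flag abrupt jumps as "discontinuities", and always keep the endpoints of the visible range. The jump criterion below is an assumed proxy for the paper's rule, not its exact test.

```python
import numpy as np

def candidate_wavelengths(lams, extinction, jump_factor=5.0):
    """Pick measurement wavelengths where the second-order differential
    extinction spectrum jumps abruptly, plus the visible-range endpoints."""
    d2 = np.diff(extinction, n=2)    # second-order finite difference
    jumps = np.abs(np.diff(d2))      # change between adjacent d2 values
    # 'discontinuity' proxy: jumps much larger than the typical jump
    idx = np.where(jumps > jump_factor * np.median(jumps))[0] + 2
    return np.unique(np.concatenate(([lams[0]], lams[idx], [lams[-1]])))
```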
Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks
NASA Astrophysics Data System (ADS)
Luo, Hongbin; Li, Lemin; Yu, Hongfang
2006-12-01
Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.
NASA Technical Reports Server (NTRS)
Schultz, Howard
1990-01-01
The retrieval algorithm for spaceborne scatterometry proposed by Schultz (1985) is extended. A circular median filter (CMF) method is presented, which operates on wind directions independently of wind speed, removing any implicit wind speed dependence. A cell weighting scheme is included in the algorithm, permitting greater weights to be assigned to more reliable data. The mathematical properties of the ambiguous solutions to the wind retrieval problem are reviewed. The CMF algorithm is tested on twelve simulated data sets. The effects of spatially correlated likelihood assignment errors on the performance of the CMF algorithm are examined. Also, consideration is given to a wind field smoothing technique that uses a CMF.
Algorithmic Coordination in Robotic Networks
2010-11-29
appropriate performance, robustness and scalability properties for various task allocation, surveillance, and information gathering applications is...networking, we envision designing and analyzing algorithms with appropriate performance, robustness and scalability properties for various task...distributed algorithms for target assignments; based on the classic auction algorithms in static networks, we intend to design efficient algorithms in worst
Adaptive Control Strategies for Flexible Robotic Arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1996-01-01
The control problem of a flexible robotic arm has been investigated. The control strategies that have been developed have wide application to the general control problem of flexible space structures. The following control strategies have been developed and evaluated: neural self-tuning control algorithm, neural-network-based fuzzy logic control algorithm, and adaptive pole assignment algorithm. All of the above algorithms have been tested through computer simulation. In addition, the hardware implementation of a computer control system that controls the tip position of a flexible arm clamped on a rigid hub mounted directly on the vertical shaft of a dc motor has been developed. An adaptive pole assignment algorithm has been applied to suppress vibrations of the described physical model of the flexible robotic arm and has been successfully tested using this testbed.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODEs) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
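The core idea of assigning binary bits to DNA bases can be illustrated with a plain two-bit packer. DNABIT Compress additionally assigns unique bit codes to exact and reverse repeat fragments, which this minimal sketch (with an assumed base-to-code mapping) does not attempt.

```python
# Fixed two-bit codes for the four bases (assumed mapping).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    """Pack a DNA string into 2 bits per base (big-endian bit order)."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    nbytes = (2 * len(seq) + 7) // 8
    return bits.to_bytes(nbytes, "big")

print(pack("ACGTACGT").hex())  # '1b1b': the byte 00011011 twice
```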
Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J
2009-06-01
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian
2015-10-23
The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operation management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to reduce the complexity of the computation.
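For comparison with the DNA-computing approach, a conventional silicon baseline for the UAP is easy to state: pad the m x n cost matrix with zero-cost dummy individuals and run the Hungarian method. This sketch assumes the variant in which each individual takes at most one job and surplus jobs stay unassigned.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_uap(cost):
    """cost: m x n matrix (m individuals, n jobs, m < n). Pad with zero-cost
    dummy individuals and solve the balanced problem; dummy rows absorb the
    surplus jobs, which are then dropped from the returned assignment."""
    m, n = cost.shape
    padded = np.vstack([cost, np.zeros((n - m, n))])
    rows, cols = linear_sum_assignment(padded)
    return [(r, c) for r, c in zip(rows, cols) if r < m]
```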
Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin
2008-05-01
For the total light scattering particle sizing technique, an inversion and classification method was proposed with the dependent model algorithm. The measured particle system was inverted simultaneously with different particle size distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which reduced the number of required wavelengths to the greatest extent and increased the selectivity of the light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
NASA Astrophysics Data System (ADS)
Wang, Fu; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Tian, Qinghua; Zhang, Qi; Rao, Lan; Tian, Feng; Luo, Biao; Liu, Yingjun; Tang, Bao
2016-10-01
Elastic Optical Networks are considered to be a promising technology for future high-speed networks. In this paper, we propose an RSA algorithm based on the ant colony optimization of minimum consecutiveness loss (ACO-MCL). Based on the effect of the spectrum consecutiveness loss on the pheromone in the ant colony optimization, the path and spectrum with the minimal impact on the network are selected for the service request. When an ant arrives at the destination node from the source node along a path, we assume that this path is selected for the request. We calculate the consecutiveness loss of candidate-neighbor link pairs along this path after the routing and spectrum assignment. Then, the network updates the pheromone according to the value of the consecutiveness loss. We save the path with the smallest value. After multiple iterations of the ant colony optimization, the finally selected path is assigned to the request. The algorithms are simulated in different networks. The results show that the ACO-MCL algorithm performs better in blocking probability and spectrum efficiency than other algorithms. Moreover, the ACO-MCL algorithm can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness. Compared with other algorithms, the ACO-MCL algorithm can reduce the blocking rate by at least 5.9% under heavy load.
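A schematic of the pheromone rule implied by ACO-MCL: after a candidate path is evaluated, its edges are reinforced in inverse proportion to the spectrum-consecutiveness loss the assignment would cause. Parameter names and the exact functional form are assumptions, not the paper's formulas.

```python
def update_pheromone(tau, chosen_path, consecutiveness_loss, rho=0.1, q=1.0):
    """tau: dict mapping edge -> pheromone level. Evaporate everywhere, then
    reinforce the chosen path; a smaller consecutiveness loss earns a larger
    pheromone deposit, biasing later ants toward low-impact paths."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)                       # evaporation
    for edge in chosen_path:
        tau[edge] += q / (1.0 + consecutiveness_loss)  # loss-weighted deposit
    return tau
```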
Interactive visual exploration and refinement of cluster assignments.
Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R
2017-09-12
With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
Use of hyperspectral imaging technology to develop a diagnostic support system for gastric cancer
NASA Astrophysics Data System (ADS)
Goto, Atsushi; Nishikawa, Jun; Kiyotoki, Shu; Nakamura, Munetaka; Nishimura, Junichi; Okamoto, Takeshi; Ogihara, Hiroyuki; Fujita, Yusuke; Hamamoto, Yoshihiko; Sakaida, Isao
2015-01-01
Hyperspectral imaging (HSI) is a new technology that obtains spectroscopic information and renders it in image form. This study examined the difference in the spectral reflectance (SR) of gastric tumors and normal mucosa recorded with a hyperspectral camera equipped with HSI technology and attempted to determine the specific wavelength that is useful for the diagnosis of gastric cancer. A total of 104 gastric tumors removed by endoscopic submucosal dissection from 96 patients at Yamaguchi University Hospital were recorded using a hyperspectral camera. We determined the optimal wavelength and the cut-off value for differentiating tumors from normal mucosa to establish a diagnostic algorithm. We also attempted to highlight tumors by image processing using the hyperspectral camera's analysis software. A wavelength of 770 nm and a cut-off value of 1/4 the corrected SR were selected as the respective optimal wavelength and cut-off values. The rates of sensitivity, specificity, and accuracy of the algorithm's diagnostic capability were 71%, 98%, and 85%, respectively. It was possible to enhance tumors by image processing at the 770-nm wavelength. HSI can be used to measure the SR in gastric tumors and to differentiate between tumorous and normal mucosa.
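The resulting per-pixel rule is essentially a one-wavelength threshold test; the direction of the inequality below is an assumption, since the abstract reports only the wavelength (770 nm) and the cut-off (1/4 of the corrected SR).

```python
def classify_pixel(corrected_sr_770: float, cutoff: float = 0.25) -> str:
    # Pixels whose corrected spectral reflectance at 770 nm falls below the
    # cut-off are flagged as tumorous (assumed inequality direction).
    return "tumor" if corrected_sr_770 < cutoff else "normal"
```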
Voltage scheduling for low power/energy
NASA Astrophysics Data System (ADS)
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage since power is quadratically related to voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the work load. Processors with limited sized buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for the processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited sized buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput. Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial time resource and latency constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimum. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas
2016-03-01
Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
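A minimal scikit-learn sketch of the two-stage retrieval this abstract compares: classify rain area first, then regress rainfall rate on the raining pixels. The predictor matrix stands in for the SEVIRI-derived cloud proxies, and the hyperparameters are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

area_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1)  # rain / no rain
rate_reg = RandomForestRegressor(n_estimators=500, n_jobs=-1)   # rate (mm/h)

def fit_retrieval(X, is_raining, rate):
    """X: (samples, features) proxies such as cloud top height/temperature;
    is_raining: boolean array; rate: rainfall rates per sample."""
    area_clf.fit(X, is_raining)                    # step 1: area delineation
    rate_reg.fit(X[is_raining], rate[is_raining])  # step 2: rate assignment

def predict_rainfall(X):
    wet = area_clf.predict(X).astype(bool)
    out = np.zeros(len(X))
    out[wet] = rate_reg.predict(X[wet])  # zero rate outside the rain area
    return out
```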
NASA Technical Reports Server (NTRS)
Russell, Philip B.; Bauman, Jill J.
2000-01-01
This SAGE II Science Team task focuses on the development of a multi-wavelength, multi-sensor Look-Up-Table (LUT) algorithm for retrieving information about stratospheric aerosols from global satellite-based observations of particulate extinction. The LUT algorithm combines the 4-wavelength SAGE II extinction measurements (0.385 <= lambda <= 1.02 microns) with the 7.96 micron and 12.82 micron extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument, thus increasing the information content available from either sensor alone. The algorithm uses the SAGE II/CLAES composite spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R_eff, surface area S, volume V and size distribution width sigma_g.
PLA realizations for VLSI state machines
NASA Technical Reports Server (NTRS)
Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.
1990-01-01
A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.
Ultrafast excited-state dynamics of 2,5-dimethylpyrrole.
Yang, Dongyuan; Min, Yanjun; Chen, Zhichao; He, Zhigang; Yuan, Kaijun; Dai, Dongxu; Yang, Xueming; Wu, Guorong
2018-04-17
The ultrafast excited-state dynamics of 2,5-dimethylpyrrole following excitation at wavelengths in the range of 265.7-216.7 nm is studied using the time-resolved photoelectron imaging method. It is found that excitation at longer wavelengths (265.7-250.2 nm) results in the population of the S1(1πσ*) state, which decays out of the photoionization window in about 90 fs. At shorter pump wavelengths (242.1-216.7 nm), the assignments are less clear-cut. We tentatively assign the initially photoexcited state(s) to the 1π3p Rydberg state(s), which have lifetimes of 159 ± 20, 125 ± 15, 102 ± 10 and 88 ± 10 fs for the pump wavelengths of 242.1, 238.1, 232.6 and 216.7 nm, respectively. Internal conversion to the S1(1πσ*) state represents at most a minor decay channel. The methyl substitution effects on the decay dynamics of the excited states of pyrrole are also discussed. Methyl substitution on the pyrrole ring seems to enhance the direct internal conversion from the 1π3p Rydberg state to the ground state, while methyl substitution on the N atom has less influence and the internal conversion to the S1(πσ*) state represents a main channel.
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Morimoto, S.; Takenaka, H.
2014-12-01
We have developed a new satellite remote sensing algorithm to retrieve aerosol optical characteristics using multi-wavelength and multi-pixel information from satellite imagers (MWP method). In this algorithm, the inversion method is a combination of the maximum a posteriori (MAP) method (Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint for the state vector. Furthermore, with the progress of computing techniques, this method has been combined with direct radiative transfer calculation numerically solved at each iteration step of the non-linear inverse problem, without using a Look Up Table (LUT), under several constraints. Retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine and coarse mode particles, the volume soot fraction in fine mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval method to all the sub-domains in the target area. We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. The results showed that the AOTs of fine and coarse modes, the soot fraction and the ground surface albedo are successfully retrieved within the expected accuracy. We discuss the accuracy of the algorithm for various land surface types. We then applied this algorithm to GOSAT/CAI imager data and compared retrieved and surface-observed AOTs at the CAI pixel closest to an AERONET (Aerosol Robotic Network) or SKYNET site in each region. Comparison at several sites in urban areas indicated that AOTs retrieved by our method agree with surface-observed AOTs within ±0.066. Our future work is to extend the algorithm to the analysis of ADEOS-II/GLI and GCOM-C/SGLI data.
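The inversion the abstract describes, MAP estimation with a Phillips-Twomey smoothing term, reduces in its linearized form to a regularized normal-equation solve. The sketch below shows one such step with schematic matrices; it is not the authors' implementation.

```python
import numpy as np

def regularized_step(J, y, Se_inv, gamma, H):
    """Solve (J^T Se^-1 J + gamma * H) x = J^T Se^-1 y: a measurement misfit
    (Jacobian J, error covariance inverse Se_inv) balanced against a
    Phillips-Twomey smoothness penalty H weighted by gamma."""
    A = J.T @ Se_inv @ J + gamma * H
    return np.linalg.solve(A, J.T @ Se_inv @ y)
```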
Two-wavelength Lidar inversion algorithm for determining planetary boundary layer height
NASA Astrophysics Data System (ADS)
Liu, Boming; Ma, Yingying; Gong, Wei; Jian, Yang; Ming, Zhang
2018-02-01
This study proposes a two-wavelength Lidar inversion algorithm to determine the boundary layer height (BLH) based on particle clustering. Color ratio and depolarization ratio are used to analyze the particle distribution, based on which the proposed algorithm can overcome the effects of complex aerosol layers to calculate the BLH. The algorithm is used to determine the top of the boundary layer under different mixing states. Experimental results demonstrate that the proposed algorithm can determine the top of the boundary layer even in complex cases. Moreover, it deals better with weak convection conditions. Finally, experimental data from June 2015 to December 2015 were used to verify the reliability of the proposed algorithm. The correlation between the results of the proposed algorithm and the manual method is R2 = 0.89 with a RMSE of 131 m and a mean bias of 49 m; the correlation between the results of the ideal profile fitting method and the manual method is R2 = 0.64 with a RMSE of 270 m and a mean bias of 165 m; and the correlation between the results of the wavelet covariance transform method and the manual method is R2 = 0.76, with a RMSE of 196 m and a mean bias of 23 m. These findings indicate that the proposed algorithm has better reliability and stability than traditional algorithms.
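The particle-clustering step can be mimicked by clustering range bins in the (color ratio, depolarization ratio) plane; the clustering method and the number of classes below are assumptions rather than the paper's choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def particle_classes(color_ratio, depol_ratio, k=3):
    """Label each lidar range bin by clustering its two optical ratios;
    the class boundaries then inform where the boundary layer top lies."""
    X = np.column_stack([color_ratio, depol_ratio])
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```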
Spatial Distortion of Vibration Modes via Magnetic Correlation of Impurities
Krasniqi, Faton S.; Zhong, Yinpeng; Epp, S. W.; ...
2018-03-08
Long wavelength vibrational modes in the ferromagnetic semiconductor Ga0.91Mn0.09As are investigated using time resolved x-ray diffraction. At room temperature, we measure oscillations in the x-ray diffraction intensity corresponding to coherent vibrational modes with well-defined wavelengths. When the correlation of magnetic impurities sets in, we observe the transition of the lattice into a disordered state that does not support coherent modes at large wavelengths. Our measurements point toward a magnetically induced broadening of long wavelength vibrational modes in momentum space and their quasilocalization in real space. More specifically, long wavelength vibrational modes cannot be assigned to a single wavelength but rather should be represented as a superposition of plane waves with different wavelengths. Lastly, our findings have strong implications for phonon-related processes, especially carrier-phonon and phonon-phonon scattering, which govern the electrical conductivity and thermal management of semiconductor-based devices.
Tahara, Tatsuki; Otani, Reo; Omae, Kaito; Gotohda, Takuya; Arai, Yasuhiko; Takaki, Yasuhiro
2017-05-15
We propose multiwavelength in-line digital holography with wavelength-multiplexed phase-shifted holograms and arbitrary symmetric phase shifts. We use phase-shifting interferometry selectively extracting wavelength information to reconstruct multiwavelength object waves separately from wavelength-multiplexed monochromatic images. The proposed technique obtains systems of equations for real and imaginary parts of multiwavelength object waves from the holograms by introducing arbitrary symmetric phase shifts. Then, the technique derives each complex amplitude distribution of each object wave selectively and analytically by solving the two systems of equations. We formulate the algorithm in the case of an arbitrary number of wavelengths and confirm its validity numerically and experimentally in the cases where the number of wavelengths is two and three.
Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian
2015-01-01
The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operation management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to reduce the complexity of the computation. PMID:26512650
Single-machine common/slack due window assignment problems with linear decreasing processing times
NASA Astrophysics Data System (ADS)
Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia
2017-08-01
This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
Investigation of the reconstruction accuracy of guided wave tomography using full waveform inversion
NASA Astrophysics Data System (ADS)
Rao, Jing; Ratassepp, Madis; Fan, Zheng
2017-07-01
Guided wave tomography is a promising tool to accurately determine the remaining wall thickness at corrosion damage, which is among the major concerns for many industries. The Full Waveform Inversion (FWI) algorithm is an attractive guided wave tomography method, which uses a numerical forward model to predict the waveform of guided waves propagating through corrosion defects, and an inverse model to reconstruct the thickness map from the ultrasonic signals captured by transducers around the defect. This paper discusses the reconstruction accuracy of the FWI algorithm on plate-like structures using simulations as well as experiments. It was shown that this algorithm can obtain a resolution of around 0.7 wavelengths for defects with smooth depth variations from acoustic modeling data, and about 1.5-2 wavelengths from elastic modeling data. Further analysis showed that the reconstruction accuracy also depends on the shape of the defect. It was demonstrated that the algorithm maintains its accuracy in the case of multiple defects, in contrast to conventional algorithms based on the Born approximation.
The West Midlands breast cancer screening status algorithm - methodology and use as an audit tool.
Lawrence, Gill; Kearins, Olive; O'Sullivan, Emma; Tappenden, Nancy; Wallis, Matthew; Walton, Jackie
2005-01-01
To illustrate the ability of the West Midlands breast screening status algorithm to assign a screening status to women with malignant breast cancer, and its uses as a quality assurance and audit tool. Breast cancers diagnosed between the introduction of the National Health Service [NHS] Breast Screening Programme and 31 March 2001 were obtained from the West Midlands Cancer Intelligence Unit (WMCIU). Screen-detected tumours were identified via breast screening units, and the remaining cancers were assigned to one of eight screening status categories. Multiple primaries and recurrences were excluded. A screening status was assigned to 14,680 women (96% of the cohort examined), 110 cancers were not registered at the WMCIU and the cohort included 120 screen-detected recurrences. The West Midlands breast screening status algorithm is a robust simple tool which can be used to derive data to evaluate the efficacy and impact of the NHS Breast Screening Programme.
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODEs) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
NASA Astrophysics Data System (ADS)
Wang, Fu; Liu, Bo; Zhang, Lijia; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun
2017-07-01
Elastic software-defined optical networks greatly improve the flexibility of the optical switching network while bringing challenges to routing and spectrum assignment (RSA). A multilayer virtual topology model is proposed to solve RSA problems. Two RSA algorithms based on the virtual topology are proposed: the ant colony optimization (ACO) algorithm of minimum consecutiveness loss and the ACO algorithm of maximum spectrum consecutiveness. Owing to the computing power of the control layer in the software-defined network, the routing algorithm avoids frequent exchange of link-state information between routers. Based on the effect of the spectrum consecutiveness loss on the pheromone in the ACO, the path and spectrum with the minimal impact on the network are selected for the service request. The proposed algorithms have been compared with other algorithms. The results show that the proposed algorithms can reduce the blocking rate by at least 5% and perform better in spectrum efficiency. Moreover, the proposed algorithms can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness.
Lamberti, Alfredo; Vanlanduit, Steve; De Pauw, Ben; Berghmans, Francis
2014-01-01
The working principle of fiber Bragg grating (FBG) sensors is mostly based on the tracking of the Bragg wavelength shift. To accomplish this task, different algorithms have been proposed, from conventional maximum and centroid detection algorithms to more recently-developed correlation-based techniques. Several studies regarding the performance of these algorithms have been conducted, but they did not take into account spectral distortions, which appear in many practical applications. This paper addresses this issue and analyzes the performance of four different wavelength tracking algorithms (maximum detection, centroid detection, cross-correlation and fast phase-correlation) when applied to distorted FBG spectra used for measuring dynamic loads. Both simulations and experiments are used for the analyses. The dynamic behavior of distorted FBG spectra is simulated using the transfer-matrix approach, and the amount of distortion of the spectra is quantified using dedicated distortion indices. The algorithms are compared in terms of achievable precision and accuracy. To corroborate the simulation results, experiments were conducted using three FBG sensors glued on a steel plate and subjected to a combination of transverse force and vibration loads. The analysis of the results showed that the fast phase-correlation algorithm guarantees the best combination of versatility, precision and accuracy. PMID:25521386
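Of the four tracking algorithms compared, centroid detection is the simplest to state: the Bragg wavelength is estimated as the power-weighted mean of the samples above a fraction of the peak. The fraction below is a tunable assumption.

```python
import numpy as np

def centroid_wavelength(lams, spectrum, frac=0.5):
    """Centroid detection for an FBG reflection spectrum sampled at `lams`:
    power-weighted mean wavelength of samples near the peak."""
    mask = spectrum >= frac * spectrum.max()   # keep samples above frac * peak
    return np.sum(lams[mask] * spectrum[mask]) / np.sum(spectrum[mask])
```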
NASA Astrophysics Data System (ADS)
Li, Yan; Collier, Martin
2007-11-01
Wavelength-routed networks have received enormous attention due to the fact that they are relatively simple to implement and implicitly offer Quality of Service (QoS) guarantees. However, they suffer from a bandwidth inefficiency problem and require complex Routing and Wavelength Assignment (RWA). Most attempts to address the above issues exploit the joint use of WDM and TDM technologies. The resultant TDM-based wavelength-routed networks partition the wavelength bandwidth into fixed-length time slots organized as a fixed-length frame. Multiple connections can thus time-share a wavelength and the grooming of their traffic leads to better bandwidth utilization. The capability of switching in both wavelength and time domains in such networks also mitigates the RWA problem. However, TDM-based wavelength-routed networks work in synchronous mode and strict synchronization among all network nodes is required. Global synchronization for all-optical networks which operate at extremely high speed is technically challenging, and deploying an optical synchronizer for each wavelength involves considerable cost. An Optical Slotted Circuit Switching (OSCS) architecture is proposed in this paper. In an OSCS network, slotted circuits are created to better utilize the wavelength bandwidth than in classic wavelength-routed networks. The protocol is designed to avoid the need for the global synchronization required by TDM-based wavelength-routed networks.
Convergence of the Ponderomotive Guiding Center approximation in the LWFA
NASA Astrophysics Data System (ADS)
Silva, Thales; Vieira, Jorge; Helm, Anton; Fonseca, Ricardo; Silva, Luis
2017-10-01
Plasma accelerators have arisen as potential candidates for future accelerator technology in the last few decades because of their predicted compactness and low cost. One of the proposed designs for plasma accelerators is based on Laser Wakefield Acceleration (LWFA). However, simulations performed for such systems have to resolve the laser wavelength, which is orders of magnitude smaller than the plasma wavelength. In this context, the Ponderomotive Guiding Center (PGC) algorithm for particle-in-cell (PIC) simulations is a potent tool. The laser is approximated by its envelope, which leads to a speed-up of around 100 times because the laser wavelength is not resolved. The plasma response is well understood, and comparisons with the full PIC code show excellent agreement. However, for LWFA, the convergence of the self-injected beam parameters, such as energy and charge, has not been studied before and is of vital importance for the use of the algorithm in predicting beam parameters. Our goal is to do a thorough investigation of the stability and convergence of the algorithm in situations of experimental relevance for LWFA. To this end, we perform simulations using the PGC algorithm implemented in the PIC code OSIRIS. To verify the PGC predictions, we compare the results with full PIC simulations. This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant agreement No 653782.
Wavelength interrogation of fiber Bragg grating sensors using tapered hollow Bragg waveguides.
Potts, C; Allen, T W; Azar, A; Melnyk, A; Dennison, C R; DeCorby, R G
2014-10-15
We describe an integrated system for wavelength interrogation, which uses tapered hollow Bragg waveguides coupled to an image sensor. Spectral shifts are extracted from the wavelength dependence of the light radiated at mode cutoff. Wavelength shifts as small as ~10 pm were resolved by employing a simple peak detection algorithm. Si/SiO₂-based cladding mirrors enable a potential operational range of several hundred nanometers in the 1550 nm wavelength region for a taper length of ~1 mm. Interrogation of a strain-tuned grating was accomplished using a broadband amplified spontaneous emission (ASE) source, and potential for single-chip interrogation of multiplexed sensor arrays is demonstrated.
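The abstract mentions only "a simple peak detection algorithm"; one standard choice that reaches sub-sample (tens of pm) resolution is three-point parabolic interpolation around the maximum, sketched here under the assumption of uniformly spaced wavelength samples.

```python
import numpy as np

def parabolic_peak(lams, intensity):
    """Sub-sample peak estimate: fit a parabola through the maximum sample
    and its two neighbours (uniform wavelength spacing assumed)."""
    i = int(np.argmax(intensity))
    if 0 < i < len(intensity) - 1:
        y0, y1, y2 = intensity[i - 1], intensity[i], intensity[i + 1]
        den = y0 - 2.0 * y1 + y2
        if den != 0.0:
            delta = 0.5 * (y0 - y2) / den          # vertex offset in samples
            return lams[i] + delta * (lams[1] - lams[0])
    return lams[i]  # fall back to the raw maximum at the array edge
```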
Algorithms for selecting informative marker panels for population assignment.
Rosenberg, Noah A
2005-11-01
Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
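A generic form of the multivariate greedy procedure is sketched below: repeatedly add whichever marker most improves a panel-level score. The scoring callback, which would evaluate assignment accuracy on simulated individuals, is left abstract and is the caller's responsibility.

```python
def greedy_panel(markers, accuracy, k):
    """Grow a panel of size k by adding, at each step, the marker that
    maximizes accuracy(panel), e.g. assignment accuracy on simulated
    individuals; assumes k <= len(markers)."""
    panel = []
    while len(panel) < k:
        best = max((m for m in markers if m not in panel),
                   key=lambda m: accuracy(panel + [m]))
        panel.append(best)
    return panel
```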
Hyperspectral imaging for melanoma screening
NASA Astrophysics Data System (ADS)
Martin, Justin; Krueger, James; Gareau, Daniel
2014-03-01
The 5-year survival rate for patients diagnosed with Melanoma, a deadly form of skin cancer, in its latest stages is about 15%, compared to over 90% for early detection and treatment. We present an imaging system and algorithm that can be used to automatically generate a melanoma risk score to aid clinicians in the early identification of this form of skin cancer. Our system images the patient's skin at a series of different wavelengths and then analyzes several key dermoscopic features to generate this risk score. We have found that shorter wavelengths of light are sensitive to information in the superficial areas of the skin while longer wavelengths can be used to gather information at greater depths. This accompanying diagnostic computer algorithm has demonstrated much higher sensitivity and specificity than the currently commercialized system in preliminary trials and has the potential to improve the early detection of melanoma.
An efficient randomized algorithm for contact-based NMR backbone resonance assignment.
Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal
2006-01-15
Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to be able to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. Our approach is able to eliminate a large majority of noise edges and to uncover large consistent sets of interactions. Our algorithm has been implemented in the platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia
NASA Technical Reports Server (NTRS)
Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve
2014-01-01
A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been used widely in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and the cloud contamination. Therefore, the current algorithm shows improvements in detecting the dust-loaded region over land during daytime. Finally, the confidence index of the current dust algorithm is shown in 10 × 10 pixels of the MODIS observations. From January to June 2006, the results of the current algorithm are within 64 to 81% of those found using the fine mode fraction (FMF) and aerosol index (AI) from the MODIS and Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over non-polluted land, computed to avoid errors due to anthropogenic aerosol, also ranges from 60 to 67%. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
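The classic split-window test that the combined algorithm builds on can be written in one line: dust reverses the usual sign of the 11-12 micron brightness temperature difference. The offset below is exactly the hard-to-identify threshold the abstract criticizes.

```python
import numpy as np

def btd_dust_mask(bt11, bt12, offset=0.0):
    """Split-window dust test: airborne dust tends to drive BT(11um) - BT(12um)
    negative, the reverse of most clear scenes. `offset` is the tunable
    clear-sky threshold whose ambiguity motivates the combined algorithm."""
    return (bt11 - bt12) < offset
```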
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Hopman, Jeroen C. W.; Liem, K. Djien; de Roode, Rowland; Verdaasdonk, Rudolf M.; Thijssen, Johan M.
2008-02-01
Continuous wave Near Infrared Spectroscopy is a well-known non-invasive technique for measuring changes in tissue oxygenation. Absorption changes (ΔO2Hb and ΔHHb) are calculated from the light attenuations using the modified Lambert-Beer equation. Generally, the concentration changes are calculated relative to the concentration at a starting point in time (delta time method). It is also possible, under certain assumptions, to calculate the concentrations by subtracting the equations at different wavelengths (delta wavelength method). We derived a new algorithm and will show its possibilities and limitations. In the delta wavelength method, the assumption is that the oxygen-independent attenuation term will be eliminated from the formula even if its value changes in time. We verified the results against the classical delta time method using extinction coefficients from different literature sources for the wavelengths 767 nm, 850 nm and 905 nm. The different methods of calculating concentration changes were applied to data collected from animal experiments. The animals (lambs) were in a stable normoxic condition; stepwise they were made hypoxic and thereafter they returned to the normoxic condition. The two algorithms were also applied to measuring two-dimensional blood oxygen saturation changes in human skin tissue. The different oxygen saturation levels were induced by alterations in respiration and by temporary arm clamping. The new delta wavelength method yielded, in a steady-state measurement, the same changes in oxy- and deoxyhemoglobin as the classical delta time method. The advantage of the new method is its independence of eventual variations of the oxygen-independent attenuation in time.
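Both calculation methods rest on the modified Lambert-Beer model dA(lambda) = L * (eps_HHb(lambda) * dHHb + eps_O2Hb(lambda) * dO2Hb); given attenuation changes at the three wavelengths, the concentration changes follow from a least-squares solve. The extinction coefficients below are placeholders, not the literature values the authors used.

```python
import numpy as np

# Placeholder extinction coefficients at 767/850/905 nm, columns [HHb, O2Hb];
# real values must be taken from the literature tables cited in the paper.
E = np.array([[1.30, 0.60],
              [0.78, 1.06],
              [0.89, 1.18]])

def delta_concentrations(delta_A, pathlength=1.0):
    """Least-squares solution of dA = (E * L) @ [dHHb, dO2Hb]."""
    x, *_ = np.linalg.lstsq(E * pathlength, np.asarray(delta_A), rcond=None)
    return x  # [dHHb, dO2Hb]
```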
Normalized Cut Algorithm for Automated Assignment of Protein Domains
NASA Technical Reports Server (NTRS)
Samanta, M. P.; Liang, S.; Zha, H.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We present a novel computational method for automatic assignment of protein domains from structural data. At the core of our algorithm lies a recently proposed clustering technique that has been very successful for image-partitioning applications. This graph-theory-based clustering method uses the notion of a normalized cut to partition an undirected graph into its strongly-connected components. Computer implementation of our method tested on the standard comparison set of proteins from the literature shows a high success rate (84%), better than most existing alternatives. In addition, several other features of our algorithm, such as reliance on few adjustable parameters, linear run-time with respect to the size of the protein and reduced complexity compared to other graph-theory-based algorithms, would make it an attractive tool for structural biologists.
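The normalized-cut step at the core of the method has a well-known spectral relaxation: split the residue-contact graph by the sign of the Fiedler vector of its normalized Laplacian. This sketch shows that relaxation only, not the authors' full domain-assignment pipeline.

```python
import numpy as np

def normalized_cut(W):
    """Bipartition an undirected weighted graph (symmetric contact matrix W)
    via the Fiedler vector of the normalized Laplacian, the standard
    relaxation of the normalized-cut criterion."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]    # eigenvector of the second-smallest eigenvalue
    return fiedler >= 0     # boolean split into two putative domains
```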
Algorithmic Case Pedagogy, Learning and Gender
ERIC Educational Resources Information Center
Bromley, Robert; Huang, Zhenyu
2015-01-01
Great investment has been made in developing algorithmically-based cases within online homework management systems. This has been done because publishers are convinced that textbook adoption decisions are influenced by the incorporation of these systems within their products. These algorithmic assignments are thought to promote learning while…
On marker-based parentage verification via non-linear optimization.
Boerner, Vinzent
2017-06-15
Parentage verification by molecular markers is mainly based on short tandem repeat markers. Single nucleotide polymorphisms (SNPs), as bi-allelic markers, have become the markers of choice for genotyping projects. Thus, the subsequent step is to use SNP genotypes for parentage verification as well. Recent developments of algorithms such as evaluating opposing homozygous SNP genotypes have drawbacks, for example the inability to reject all animals of a sample of potential parents. This paper describes an algorithm for parentage verification by constrained regression which overcomes the latter limitation and proves to be very fast and accurate even when the number of SNPs is as low as 50. The algorithm was tested on a sample of 14,816 animals with 50, 100 and 500 SNP genotypes randomly selected from 40k genotypes. The samples of putative parents of these animals contained either five random animals, or four random animals and the true sire. Parentage assignment was performed by ranking of regression coefficients, or by setting a minimum threshold for regression coefficients. The assignment quality was evaluated by the power of assignment (P_A) and the power of exclusion (P_E). If the sample of putative parents contained the true sire and parentage was assigned by coefficient ranking, P_A and P_E were both higher than 0.99 for the 500 and 100 SNP genotypes, and higher than 0.98 for the 50 SNP genotypes. When parentage was assigned by a coefficient threshold, P_E was higher than 0.99 regardless of the number of SNPs, but P_A decreased from 0.99 (500 SNPs) to 0.97 (100 SNPs) and 0.92 (50 SNPs). If the sample of putative parents did not contain the true sire and parentage was rejected using a coefficient threshold, the algorithm achieved a P_E of 1 (500 SNPs), 0.99 (100 SNPs) and 0.97 (50 SNPs). The algorithm described here is easy to implement, fast and accurate, and is able to assign parentage using genomic marker data with as few as 50 SNPs.
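One way to realize "parentage verification by constrained regression" is non-negative least squares of the offspring's SNP vector on the candidate parents' genotype vectors, rejecting all candidates when no coefficient clears a threshold. The NNLS formulation and the threshold value are assumptions standing in for the paper's exact constraints.

```python
import numpy as np
from scipy.optimize import nnls

def assign_parent(offspring, candidate_genotypes, threshold=0.5):
    """Regress the offspring's SNP vector (0/1/2 coding) on the candidates'
    genotype vectors with non-negative coefficients; accept the largest
    coefficient if it clears the (assumed) threshold, else reject all."""
    X = np.column_stack(candidate_genotypes).astype(float)
    coef, _ = nnls(X, offspring.astype(float))
    best = int(np.argmax(coef))
    return best if coef[best] >= threshold else None  # None = reject all
```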
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.
Choi, Sangil; Park, Jong Hyuk
2016-12-02
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
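In the spirit of MICA, a greedy assignment that charges each candidate channel the summed interference factors with already-assigned neighbors looks as follows; the pairwise interference function and the node visiting order are assumed inputs, not the paper's exact procedure.

```python
def greedy_channel_assignment(tree_nodes, neighbors, channels, interference):
    """Assign each multicast-tree node the channel minimizing the summed
    interference factors with neighbours that already hold a channel.
    interference(u, v) returns the pairwise factor for nodes u and v."""
    assignment = {}
    for u in tree_nodes:
        def cost(c):
            return sum(interference(u, v) for v in neighbors[u]
                       if assignment.get(v) == c)
        assignment[u] = min(channels, key=cost)  # least-interfering channel
    return assignment
```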
Lakbub, Jude C; Su, Xiaomeng; Zhu, Zhikai; Patabandige, Milani W; Hua, David; Go, Eden P; Desaire, Heather
2017-08-04
The glycopeptide analysis field is tightly constrained by a lack of effective tools that translate mass spectrometry data into meaningful chemical information, and perhaps the most challenging aspect of building effective glycopeptide analysis software is designing an accurate scoring algorithm for MS/MS data. We provide the glycoproteomics community with two tools to address this challenge. The first tool, a curated set of 100 expert-assigned CID spectra of glycopeptides, contains a diverse set of spectra from a variety of glycan types; the second tool, Glycopeptide Decoy Generator, is a new software application that generates glycopeptide decoys de novo. We developed these tools so that emerging methods of assigning glycopeptides' CID spectra could be rigorously tested. Software developers or those interested in developing skills in expert (manual) analysis can use these tools to facilitate their work. We demonstrate the tools' utility in assessing the quality of one particular glycopeptide software package, GlycoPep Grader, which assigns glycopeptides to CID spectra. We first acquired the set of 100 expert assigned CID spectra; then, we used the Decoy Generator (described herein) to generate 20 decoys per target glycopeptide. The assigned spectra and decoys were used to test the accuracy of GlycoPep Grader's scoring algorithm; new strengths and weaknesses were identified in the algorithm using this approach. Both newly developed tools are freely available. The software can be downloaded at http://glycopro.chem.ku.edu/GPJ.jar.
UAVs Task and Motion Planning in the Presence of Obstacles and Prioritized Targets
Gottlieb, Yoav; Shima, Tal
2015-01-01
The intertwined task assignment and motion planning problem of assigning a team of fixed-wing unmanned aerial vehicles to a set of prioritized targets in an environment with obstacles is addressed. It is assumed that the targets' locations and initial priorities are determined using a network of unattended ground sensors used to detect potential threats at restricted zones. The targets are characterized by a time-varying level of importance, and timing constraints must be fulfilled before a vehicle is allowed to visit a specific target. It is assumed that the vehicles are carrying body-fixed sensors and, thus, are required to approach a designated target while flying straight and level. The fixed-wing aerial vehicles are modeled as Dubins vehicles, i.e., having a constant speed and a minimum turning radius constraint. The investigated integrated problem of task assignment and motion planning is posed in the form of a decision tree, and two search algorithms are proposed: an exhaustive algorithm that improves over run time and provides the minimum cost solution, encoded in the tree, and a greedy algorithm that provides a quick feasible solution. To satisfy the target's visitation timing constraint, a path elongation motion planning algorithm amidst obstacles is provided. Using simulations, the performance of the algorithms is compared, evaluated and exemplified. PMID:26610522
Absolute Points for Multiple Assignment Problems
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2006-01-01
An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Fuell, Kevin; Knaff, John; Lee, Thomas
2012-01-01
What is an RGB Composite Image? (1) Current and future satellite instruments provide remote sensing at a variety of wavelengths. (2) RGB composite imagery assigns individual wavelengths or channel differences to the intensities of the red, green, and blue components of a pixel color. (3) Each red, green, and blue color intensity is related to physical properties within the final composite image. (4) Final color assignments are therefore related to the characteristics of image pixels. (5) Products may simplify the interpretation of data from multiple bands by displaying information in a single image. Current Products and Usage: Collaborations between SPoRT, CIRA, and NRL have facilitated the use and evaluation of RGB products at a variety of NWS forecast offices and National Centers. These products are listed in the accompanying table.
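Point (2) can be sketched in a few lines; the channel choices, value ranges, and gamma handling below are hypothetical, since operational RGB recipes define them per product.

```python
# Hedged sketch of an RGB composite: scale three channels (or channel
# differences) into [0, 1] red, green, and blue intensities.
import numpy as np

def rgb_composite(channels, ranges, gamma=1.0):
    """channels: three (H, W) arrays; ranges: three (lo, hi) tuples."""
    def scale(ch, lo, hi):
        x = np.clip((ch - lo) / (hi - lo), 0.0, 1.0)
        return x ** (1.0 / gamma)
    return np.dstack([scale(c, lo, hi) for c, (lo, hi) in zip(channels, ranges)])

rng = np.random.default_rng(0)
ir_diff = rng.uniform(-5, 5, (64, 64))    # fake channel difference [K]
vis = rng.uniform(0, 1, (64, 64))         # fake visible reflectance
btemp = rng.uniform(200, 250, (64, 64))   # fake brightness temperature [K]
img = rgb_composite([ir_diff, vis, btemp], [(-5, 5), (0, 1), (200, 250)])
```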
Method and Apparatus for Accurately Calibrating a Spectrometer
NASA Technical Reports Server (NTRS)
Youngquist, Robert C. (Inventor); Simmons, Stephen M. (Inventor)
2013-01-01
A calibration assembly for a spectrometer is provided. The assembly includes a spectrometer having n detector elements, where each detector element is assigned a predetermined wavelength value. A first source emitting first radiation is used to calibrate the spectrometer. A device is placed in the path of the first radiation to split the first radiation into a first beam and a second beam. The assembly is configured so that one of the first and second beams travels a path-difference distance longer than the other of the first and second beams. An output signal is generated by the spectrometer when the first and second beams enter the spectrometer. The assembly includes a controller operable for processing the output signal and adapted to calculate correction factors for the respective predetermined wavelength values assigned to each detector element.
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Guerreschi, Gian Giacomo; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability problem, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ is the number of linearly independent clauses. Remarkably, the proposed quantum algorithm proves to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lepson, J K; Beiersdorfer, P; Behar, E
Atomic structure codes have a difficult time accurately calculating the wavelengths of many-electron ions without the benefit of laboratory measurements. This is especially true for wavelengths of lines in the extreme ultraviolet and soft x-ray regions. We are using the low-energy capability of the Livermore electron beam ion traps to compile a comprehensive catalog of astrophysically relevant emission lines in support of satellite x-ray observations. Our database includes wavelength measurements, relative intensities, and line assignments, and is compared to a full set of calculations using the Hebrew University - Lawrence Livermore Atomic Code (HULLAC). Mean deviation of HULLAC calculations from our measured wavelength values is highest for L-shell transitions of neon-like ions and lowest for lithium-like ions, ranging from a mean deviation of over 0.5 Å for Si V to 12 mÅ in Ar XVI.
TWC and AWG based optical switching structure for OVPN in WDM-PON
NASA Astrophysics Data System (ADS)
Bai, Hui-feng; Chen, Yu-xin; Wang, Qin
2015-03-01
With the rapid development of optical elements with large capacity and high speed, the network architecture is of great importance in determining the performance of a wavelength division multiplexing passive optical network (WDM-PON). This paper proposes a switching structure based on the tunable wavelength converter (TWC) and the arrayed-waveguide grating (AWG) for WDM-PON, in order to provide the function of an optical virtual private network (OVPN). Using tunable wavelength converter technology, this switch structure is designed to work between the optical line terminal (OLT) and optical network units (ONUs) in the WDM-PON system. Moreover, the wavelength assignment of upstream/downstream can be realized, and direct communication between ONUs is also allowed via a private wavelength channel. Simulation results show that the proposed TWC and AWG based switching structure is able to achieve the OVPN function and to gain better performance in terms of bit error rate (BER) and time delay.
NASA Astrophysics Data System (ADS)
Hortos, William S.
1997-04-01
The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraints and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. The next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support the time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video and, thus, require new dynamic channel assignment (DCA) algorithms. This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in determining global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.
Strategic Control Algorithm Development : Volume 1. Summary.
DOT National Transportation Integrated Search
1974-08-01
Strategic control is an air traffic management concept wherein a central control authority determines, and assigns to each participating airplane, a conflict-free, four-dimensional route-time profile. The route-time profile assignments are long term ...
USDA-ARS?s Scientific Manuscript database
Hyperspectral scattering is a promising technique for rapid and noninvasive measurement of multiple quality attributes of apple fruit. A hierarchical evolutionary algorithm (HEA) approach, in combination with subspace decomposition and partial least squares (PLS) regression, was proposed to select o...
Rule-based support system for multiple UMLS semantic type assignments
Geller, James; He, Zhe; Perl, Yehoshua; Morrey, C. Paul; Xu, Julia
2012-01-01
Background: When new concepts are inserted into the UMLS, they are assigned one or several semantic types from the UMLS Semantic Network by the UMLS editors. However, not every combination of semantic types is permissible. It was observed that many concepts with rare combinations of semantic types have erroneous semantic type assignments or prohibited combinations of semantic types. The correction of such errors is resource-intensive. Objective: We design a computational system to inform UMLS editors as to whether a specific combination of two, three, four, or five semantic types is permissible, prohibited, or questionable. Methods: We identify a set of inclusion and exclusion instructions in the UMLS Semantic Network documentation and derive corresponding rule-categories, as well as rule-categories from the UMLS concept content. We then design an algorithm, adviseEditor, based on these rule-categories. The algorithm specifies rules for how an editor should proceed when considering a tuple (pair, triple, quadruple, quintuple) of semantic types to be assigned to a concept. Results: Eight rule-categories were identified. A Web-based system was developed to implement the adviseEditor algorithm, which returns, for an input combination of semantic types, whether it is permitted, prohibited or (in a few cases) requires more research. The numbers of semantic type pairs assigned to each rule-category are reported. Interesting examples for each rule-category are illustrated. Cases of semantic type assignments that contradict rules are listed, including recently introduced ones. Conclusion: The adviseEditor system implements explicit and implicit knowledge available in the UMLS in a system that informs UMLS editors about the permissibility of a desired combination of semantic types. Using adviseEditor might help accelerate the work of the UMLS editors and prevent erroneous semantic type assignments. PMID:23041716
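A minimal sketch of the lookup behavior described in the Results, returning permitted/prohibited/questionable for a tuple of semantic types, might look as follows; the rule contents are invented placeholders, not the actual UMLS rule-categories.

```python
# Hedged sketch of an adviseEditor-style lookup over rule sets.
# The PERMITTED/PROHIBITED contents below are invented placeholders.
PERMITTED = {frozenset({"T116", "T121"})}    # placeholder pair of type codes
PROHIBITED = {frozenset({"T116", "T017"})}   # placeholder pair

def advise_editor(semantic_types):
    key = frozenset(semantic_types)
    if key in PERMITTED:
        return "permitted"
    if key in PROHIBITED:
        return "prohibited"
    return "questionable: flag for editor review"

print(advise_editor(["T116", "T121"]))       # -> permitted
print(advise_editor(["T116", "T047"]))       # -> questionable
```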
NASA Astrophysics Data System (ADS)
Hashimoto, Makiko; Nakajima, Teruyuki
2017-06-01
We developed a satellite remote sensing algorithm to retrieve the aerosol optical properties using satellite-received radiances for multiple wavelengths and pixels. Our algorithm utilizes spatial inhomogeneity of surface reflectance to retrieve aerosol properties, and the main target is urban aerosols. This algorithm can simultaneously retrieve aerosol optical thicknesses (AOT) for fine- and coarse-mode aerosols, soot volume fraction in fine-mode aerosols (SF), and surface reflectance over heterogeneous surfaces such as urban areas that are difficult to obtain by conventional pixel-by-pixel methods. We applied this algorithm to radiances measured by the Greenhouse Gases Observing Satellite/Thermal and Near Infrared Sensor for Carbon Observations-Cloud and Aerosol Image (GOSAT/TANSO-CAI) at four wavelengths and were able to retrieve the aerosol parameters in several urban regions and other surface types. A comparison of the retrieved AOTs with those from the Aerosol Robotic Network (AERONET) indicated retrieval accuracy within ±0.077 on average. It was also found that the column-averaged SF and the aerosol single scattering albedo (SSA) underwent seasonal changes as consistent with the ground surface measurements of SSA and black carbon at Beijing, China.
Developing an eco-routing application.
DOT National Transportation Integrated Search
2014-01-01
The study develops eco-routing algorithms and investigates and quantifies the system-wide impacts of implementing an eco-routing system. Two eco-routing algorithms are developed: one based on vehicle sub-populations (ECO-Subpopulation Feedback Assign...
A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion
NASA Astrophysics Data System (ADS)
Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen
2017-09-01
In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, 3D content is still insufficient, so estimating depth information for 2D-to-3D conversion from a video is increasingly important. In this paper, we present a novel algorithm to estimate depth information from a video via a scene classification algorithm. In order to obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape type, close-up type, and linear perspective type. For the landscape type, a specific algorithm divides the image into many blocks and assigns depth values using the relative height cue of the image. For the close-up type, a saliency-based method is adopted to enhance the foreground, and this is combined with the global depth gradient to generate the final depth map. For the linear perspective type, vanishing line detection yields a vanishing point, which is regarded as the farthest point from the viewer and is assigned the deepest depth value; the rest of the image is assigned depth values according to the distance between each point and the vanishing point. Finally, depth-image-based rendering is employed to generate stereoscopic virtual views after bilateral filtering. Experiments show that the proposed algorithm achieves realistic 3D effects and yields satisfactory results, with perception scores of anaglyph images between 6.8 and 7.8.
NASA Astrophysics Data System (ADS)
Wei, Qingyang; Dai, Tiantian; Ma, Tianyu; Liu, Yaqiang; Gu, Yu
2016-10-01
An Anger-logic based pixelated PET detector block requires a crystal position map (CPM) to assign the position of each detected event to a most probable crystal index. Accurate assignments are crucial to PET imaging performance. In this paper, we present a novel automatic approach to generate the CPMs for dual-layer offset (DLO) PET detectors using a stratified peak tracking method, in which the top and bottom layers are distinguished by their intensity difference and the peaks of the top and bottom layers are tracked by a singular value decomposition (SVD) and mean-shift algorithm in succession. The CPM is created by classifying each pixel to its nearest peak and assigning the pixel the crystal index of that peak. A Matlab-based graphical user interface program was developed, including the automatic algorithm and a manual interaction procedure. The algorithm was tested on three DLO PET detector blocks. Results show that the proposed method exhibits good performance as well as robustness for all three blocks. Compared to existing methods, our approach can directly distinguish the layer and crystal indices using the information of intensity and the offset grid pattern.
A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.
Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei
2018-04-08
A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. This system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry-Perot (F-P) filter and an optical switch. To improve system resolution, the F-P filter was employed. Because this filter is non-linear, it shifts the central wavelengths; the deviation is compensated by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, allowing the system to combine up to 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adapting threshold is designed, and the peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision reached 0.5 pm. Through the comparison of different peak detection algorithms and interrogation approaches, the system was verified to have an optimum comprehensive performance in terms of precision, capacity and speed.
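A minimal sketch of a self-adapting threshold peak detector of this general kind is shown below; the mean-plus-k-sigma adaptation rule and the synthetic FBG spectrum are assumptions, not the paper's exact method.

```python
# Hedged sketch: the threshold follows the scan's own statistics
# (mean + k*std), then peaks are local maxima above it.
import numpy as np

def detect_peaks(scan, k=3.0):
    thresh = scan.mean() + k * scan.std()
    return [i for i in range(1, len(scan) - 1)
            if scan[i] > thresh and scan[i - 1] < scan[i] >= scan[i + 1]], thresh

x = np.linspace(0.0, 1.0, 2000)
scan = 0.02 * np.random.default_rng(0).random(x.size)   # noise floor
for c in (0.3, 0.7):                                    # two fake FBG peaks
    scan += np.exp(-((x - c) / 0.004) ** 2)
peaks, thresh = detect_peaks(scan)
print(peaks, round(float(thresh), 3))
```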
A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry
NASA Astrophysics Data System (ADS)
Wang, Chisheng; Li, Qingquan; Liu, Yanxiong; Wu, Guofeng; Liu, Peng; Ding, Xiaoli
2015-03-01
Due to their low cost and light weight, single-wavelength LiDAR bathymetric systems are an ideal option for shallow-water (<12 m) bathymetry. However, one disadvantage of such systems is the lack of near-infrared and Raman channels, which results in difficulties in extracting the water surface. Therefore, the choice of a suitable waveform processing method is extremely important to guarantee the accuracy of the bathymetric retrieval. In this paper, we test six algorithms for single-wavelength bathymetric waveform processing, i.e. peak detection (PD), the average square difference function (ASDF), Gaussian decomposition (GD), quadrilateral fitting (QF), Richardson-Lucy deconvolution (RLD), and Wiener filter deconvolution (WD). To date, most of these algorithms have previously only been applied to topographic LiDAR waveforms captured over land. A simulated dataset and an Optech Aquarius dataset were used to assess the algorithms, with the focus being on their capability of extracting the depth and the bottom response. The influences of a number of water and equipment parameters were also investigated by the use of a Monte Carlo method. The results showed that the RLD method had a superior performance in terms of a high detection rate and low errors in the retrieved depth and magnitude. The attenuation coefficient, noise level, water depth, and bottom reflectance had significant influences on the measurement error of the retrieved depth, while the effects of scan angle and water surface roughness were not so obvious.
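Of the six methods, Richardson-Lucy deconvolution is easy to sketch in one dimension; the kernel shape, iteration count, and synthetic two-return waveform below are illustrative assumptions.

```python
# Minimal 1-D Richardson-Lucy deconvolution sketch for a received
# waveform with a known system response (kernel).
import numpy as np

def richardson_lucy(observed, kernel, n_iter=50, eps=1e-12):
    estimate = np.full_like(observed, observed.mean())
    kernel_flip = kernel[::-1]
    for _ in range(n_iter):
        forward = np.convolve(estimate, kernel, mode="same")
        estimate *= np.convolve(observed / (forward + eps), kernel_flip, mode="same")
    return estimate

k = np.arange(31.0)
kernel = np.exp(-0.5 * ((k - 15) / 3.0) ** 2)
kernel /= kernel.sum()
truth = np.zeros(200)
truth[60], truth[68] = 1.0, 0.4                  # surface + bottom returns
observed = np.convolve(truth, kernel, mode="same")
print(int(richardson_lucy(observed, kernel).argmax()))   # near index 60
```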
NASA Astrophysics Data System (ADS)
Kong, Wenwen; Liu, Fei; Zhang, Chu; Bao, Yidan; Yu, Jiajia; He, Yong
2014-01-01
Tomatoes are cultivated around the world, and gray mold is one of their most prominent and destructive diseases. An early disease detection method can decrease losses caused by plant diseases and prevent their spread. The activity of peroxidase (POD) is a very important indicator of disease stress in plants. The objective of this study is to examine the possibility of fast detection of POD activity in tomato leaves infected with Botrytis cinerea using hyperspectral imaging data. Five pre-treatment methods were investigated. Genetic algorithm-partial least squares (GA-PLS) was applied to select optimal wavelengths. A new fast-learning neural algorithm named extreme learning machine (ELM) was employed as the multivariate analytical tool in this study. 21 optimal wavelengths were selected by GA-PLS and used as inputs to three calibration models. The best prediction result was achieved by the ELM model with the selected wavelengths, with r and RMSEP in validation of 0.8647 and 465.9880, respectively. The results indicated that hyperspectral imaging can be considered a valuable tool for POD activity prediction. The selected wavelengths could be potential resources for instrument development.
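The ELM itself is compact enough to sketch: random input weights and an analytic least-squares solve for the output weights. The hidden-layer size, activation, and synthetic 21-wavelength data below are assumptions.

```python
# Minimal extreme learning machine (ELM) regression sketch.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random hidden layer
        self.beta = np.linalg.pinv(H) @ y     # analytic output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.random((120, 21))                     # 21 selected wavelengths
y = X @ rng.random(21) + 0.05 * rng.standard_normal(120)
model = ELM().fit(X[:100], y[:100])
print(np.corrcoef(model.predict(X[100:]), y[100:])[0, 1])
```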
Adaptive illumination source for multispectral vision system applied to material discrimination
NASA Astrophysics Data System (ADS)
Conde, Olga M.; Cobo, Adolfo; Cantero, Paulino; Conde, David; Mirapeix, Jesús; Cubillas, Ana M.; López-Higuera, José M.
2008-04-01
A multispectral system based on a monochrome camera and an adaptive illumination source is presented in this paper. Its preliminary application is focused on material discrimination for the food and beverage industries, where monochrome, color and infrared imaging have been successfully applied for this task. This work proposes a different approach, in which the wavelengths relevant to the required discrimination task are selected in advance using a Sequential Forward Floating Selection (SFFS) algorithm. A light source based on light-emitting diodes (LEDs) at these wavelengths is then used to sequentially illuminate the material under analysis, and the resulting images are captured by a CCD camera with spectral response over the entire range of the selected wavelengths. Finally, the resulting multispectral planes are processed using a Spectral Angle Mapping (SAM) algorithm, whose output is the desired material classification. Among other advantages, this approach of controlled and specific illumination produces multispectral imaging with a simple monochrome camera, and cold illumination restricted to specific relevant wavelengths, which is desirable for the food and beverage industry. The proposed system has been tested with success for the automatic detection of foreign objects in the tobacco processing industry.
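The SAM step reduces to an angle computation between each pixel spectrum and a set of reference spectra; a minimal sketch with placeholder spectra follows.

```python
# Spectral Angle Mapping sketch: assign each pixel to the reference
# spectrum with the smallest spectral angle. Cube and references are
# random placeholders.
import numpy as np

def sam_classify(cube, references):
    """cube: (H, W, B) image; references: (C, B) spectra."""
    px = cube.reshape(-1, cube.shape[-1])
    cos = (px @ references.T) / (np.linalg.norm(px, axis=1, keepdims=True)
                                 * np.linalg.norm(references, axis=1))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return angles.argmin(axis=1).reshape(cube.shape[:2])

rng = np.random.default_rng(0)
cube = rng.random((32, 32, 6))     # 6 LED wavelengths
refs = rng.random((3, 6))          # 3 material classes
labels = sam_classify(cube, refs)
print(labels.shape, int(labels.max()))
```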
NASA Astrophysics Data System (ADS)
Meitav, Omri; Shaul, Oren; Abookasis, David
2018-03-01
A practical algorithm for estimating the wavelength-dependent refractive index (RI) of a turbid sample in the spatial frequency domain with the aid of Kramers-Kronig (KK) relations is presented. In it, phase-shifted sinusoidal patterns (structured illumination) are serially projected at a high spatial frequency onto the sample surface (mouse scalp) at different near-infrared wavelengths while a camera mounted normally to the sample surface captures the reflected diffuse light. In the offline analysis pipeline, recorded images at each wavelength are converted to spatial absorption maps by a logarithmic function, and once the absorption coefficient information is obtained, the imaginary part (k) of the complex RI (CRI), based on Maxwell's equations, can be calculated. Using the data represented by k, the real part of the CRI (n) is then resolved by KK analysis. The wavelength dependence of n(λ) is then fitted separately using four standard dispersion models: Cornu, Cauchy, Conrady, and Sellmeier. In addition, a three-dimensional surface-profile distribution of n is provided based on phase profilometry principles and a phase-unwrapping-based phase-derivative-variance algorithm. Experimental results demonstrate the capability of the proposed approach for determining a biological sample's RI value.
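The KK step can be sketched numerically: the real part n(ω) is recovered from k(ω) via the principal-value integral n(ω) = 1 + (2/π) P∫ ω′k(ω′)/(ω′² − ω²) dω′, here approximated crudely by dropping the singular grid point (a stand-in for more careful schemes such as Maclaurin's method); the synthetic absorption band is an assumption.

```python
# Hedged numerical Kramers-Kronig sketch: real part from imaginary part.
import numpy as np

def kk_real_from_imag(omega, kappa):
    n = np.ones_like(omega)
    dw = omega[1] - omega[0]
    for i, w in enumerate(omega):
        mask = np.arange(omega.size) != i           # crude PV: skip singular point
        n[i] += (2.0 / np.pi) * np.sum(
            omega[mask] * kappa[mask] / (omega[mask] ** 2 - w ** 2)) * dw
    return n

omega = np.linspace(1.0, 3.0, 400)                  # arbitrary frequency units
kappa = 0.05 * np.exp(-((omega - 2.0) / 0.1) ** 2)  # synthetic absorption band
n = kk_real_from_imag(omega, kappa)
print(float(n.min()), float(n.max()))
```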
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Takenaka, H.; Higurashi, A.; Nakajima, T.
2017-12-01
Aerosol in the atmosphere is an important constituent for determining the earth's radiation budget, so accurate aerosol retrieval from satellites is useful. We have developed a satellite remote sensing algorithm to retrieve aerosol optical properties using multi-wavelength and multi-pixel information from satellite imagers (MWPM). The method simultaneously derives aerosol optical properties, such as aerosol optical thickness (AOT), single scattering albedo (SSA) and aerosol size information, by using spatial differences of wavelengths (multi-wavelength) and surface reflectances (multi-pixel). The method is useful for aerosol retrieval over spatially heterogeneous surfaces such as urban regions. In this algorithm, the inversion method is a combination of an optimal estimation method and a smoothing constraint on the state vector. Furthermore, this method has been combined with direct radiative transfer calculation (RTM), numerically solved at each iteration step of the non-linear inverse problem without using a look-up table (LUT), under several constraints. However, this takes too much computation time. To accelerate the calculation, we replaced the RTM with an accelerated RTM solver learned by a neural-network-based method, EXAM (Takenaka et al., 2011), using the Rstar code. The calculation time was thereby shortened to about one thousandth. We applied MWPM combined with EXAM to GOSAT/TANSO-CAI (Cloud and Aerosol Imager). CAI is a supplementary sensor of TANSO-FTS, dedicated to measuring cloud and aerosol properties. CAI has four bands, 380, 674, 870 and 1600 nm, and observes at 500 meter resolution for band 1, band 2 and band 3, and at 1.5 km for band 4. Retrieved parameters are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine- and coarse-mode particles at a wavelength of 500 nm, the volume soot fraction in fine-mode particles, and the ground surface albedo at each observed wavelength, obtained by combining a minimum reflectance method and Fukuda et al. (2013). We will show the results and discuss the accuracy of the algorithm for various surface types. Our future work is to extend the algorithm to the analysis of GOSAT-2/TANSO-CAI-2 and GCOM-C/SGLI data.
Kiguchi, Masashi; Funane, Tsukasa
2014-11-01
A real-time algorithm for removing scalp-blood signals from functional near-infrared spectroscopy signals is proposed. Scalp and deep signals have different dependencies on the source-detector distance, and this characteristic was used to separate them. The algorithm was validated through an experiment using a dynamic phantom in which shallow and deep absorptions were independently changed. The algorithm for the measurement of oxygenated and deoxygenated hemoglobin using two wavelengths was explicitly obtained. This algorithm is potentially useful for real-time systems, e.g., brain-computer interfaces and neuro-feedback systems.
Genetic algorithm driven spectral shaping of supercontinuum radiation in a photonic crystal fiber
NASA Astrophysics Data System (ADS)
Michaeli, Linor; Bahabad, Alon
2018-05-01
We employ a genetic algorithm to control a pulse-shaping system pumping a nonlinear photonic crystal fiber with ultrashort pulses. With this system, we are able to modify the spectrum of the generated supercontinuum (SC) radiation to yield narrow Gaussian-like features around pre-selected wavelengths over the whole SC spectrum.
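A generic GA skeleton of the kind used for such closed-loop shaping is sketched below; in the real system the fitness would come from the measured SC spectrum, so the analytic fitness, population size, and mutation scale here are placeholders.

```python
# Hedged GA skeleton: genomes are phase masks for the pulse shaper.
import numpy as np
rng = np.random.default_rng(1)

def fitness(mask):                       # stand-in for spectrometer feedback
    return -np.sum((mask - np.sin(np.linspace(0, 3, mask.size))) ** 2)

pop = rng.uniform(0, 2 * np.pi, (40, 64))          # 40 masks, 64 shaper pixels
for gen in range(100):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-20:]]        # truncation selection
    children = parents[rng.integers(0, 20, 40)].copy()
    mates = parents[rng.integers(0, 20, 40)]
    cuts = rng.integers(1, 63, 40)                 # one-point crossover
    for i, c in enumerate(cuts):
        children[i, c:] = mates[i, c:]
    children += rng.normal(0.0, 0.05, children.shape)  # mutation
    pop = children
print(max(fitness(m) for m in pop))
```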
Global optimization of multimode interference structure for ratiometric wavelength measurement
NASA Astrophysics Data System (ADS)
Wang, Qian; Farrell, Gerald; Hatta, Agus Muhamad
2007-07-01
The multimode interference structure is conventionally used as a splitter/combiner. In this paper, it is optimized as an edge filter for ratiometric wavelength measurement, which can be used in the demodulation of fiber Bragg grating sensors. The global optimization algorithm, adaptive simulated annealing, is introduced into the design of the multimode interference structure, covering the length and width of the multimode waveguide section and the positions of the input and output waveguides. The designed structure shows a suitable spectral response for wavelength measurement and a good fabrication tolerance.
Optimization of an integrated wavelength monitor device
NASA Astrophysics Data System (ADS)
Wang, Pengfei; Brambilla, Gilberto; Semenova, Yuliya; Wu, Qiang; Farrell, Gerald
2011-05-01
In this paper an edge filter based on multimode interference in an integrated waveguide is optimized for a wavelength monitoring application. This can also be used as a demodulation element in a fibre Bragg grating sensing system. A global optimization algorithm is presented for the optimum design of the multimode interference device, including a range of parameters of the multimode waveguide, such as length, width and position of the input and output waveguides. The designed structure demonstrates the desired spectral response for wavelength measurements. Fabrication tolerance is also analysed numerically for this structure.
The Collection 6 'dark-target' MODIS Aerosol Products
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Mattoo, Shana; Munchak, Leigh A.; Kleidman, Richard G.; Patadia, Falguni; Gupta, Pawan; Remer, Lorraine
2013-01-01
Aerosol retrieval algorithms are applied to Moderate resolution Imaging Spectroradiometer (MODIS) sensors on both Terra and Aqua, creating two streams of decade-plus aerosol information. Products of aerosol optical depth (AOD) and aerosol size are used for many applications, but the primary concern is that these global products are comprehensive and consistent enough for use in climate studies. One of our major customers is the international modeling comparison study known as AEROCOM, which relies on the MODIS data as a benchmark. In order to keep up with the needs of AEROCOM and other MODIS data users, while utilizing new science and tools, we have improved the algorithms and products. The code, and the associated products, will be known as Collection 6 (C6). While not a major overhaul from the previous Collection 5 (C5) version, there are enough changes that there are significant impacts to the products and their interpretation. In its entirety, the C6 algorithm is comprised of three sub-algorithms for retrieving aerosol properties over different surfaces: these include the dark-target (DT) algorithms to retrieve over (1) ocean and (2) vegetated/dark-soiled land, plus the (3) Deep Blue (DB) algorithm, originally developed to retrieve over desert/arid land. Focusing on the two DT algorithms, we have updated assumptions for central wavelengths, Rayleigh optical depths and gas (H2O, O3, CO2, etc.) absorption corrections, while relaxing the solar zenith angle limit (up to 84°) to increase pole-ward coverage. For DT-land, we have updated the cloud mask to allow heavy smoke retrievals, fine-tuned the assignments for aerosol type as a function of season and location, corrected bugs in the Quality Assurance (QA) logic, and added diagnostic parameters such as topographic altitude. For DT-ocean, improvements include a revised cloud mask for thin-cirrus detection, inclusion of wind speed dependence in the retrieval, updates to the logic of QA Confidence flag (QAC) assignment, and additions of important diagnostic information. At the same time as we have introduced algorithm changes, we have also accounted for upstream changes including new instrument calibration, revised land-sea masking, and changed cloud masking. Upstream changes also impact the coverage and global statistics of the retrieved AOD. Although our responsibility is to the DT code and products, we have also added a product that merges DT and DB products over semi-arid land surfaces to provide a more gap-free dataset, primarily for visualization purposes. Preliminary validation shows that compared to surface-based sunphotometer data, the C6, Level 2 (along swath) DT-products compare at least as well as those from C5. C6 will include new diagnostic information about clouds in the aerosol field, including an aerosol cloud mask at 500 m resolution, and calculations of the distance to the nearest cloud from clear pixels. Finally, we have revised the strategy for aggregating and averaging the Level 2 (swath) data to become Level 3 (gridded) data. All together, the changes to the DT algorithms will result in reduced global AOD (by 0.02) over ocean and increased AOD (by 0.02) over land, along with changes in spatial coverage. Changes in calibration will have more impact on Terra's time series, especially over land. This will result in a significant reduction in artificial differences in the Terra and Aqua datasets, and will stabilize the MODIS data as a target for AEROCOM studies.
Molecular docking, spectroscopic studies and quantum calculations on nootropic drug.
Uma Maheswari, J; Muthu, S; Sundius, Tom
2014-04-05
A systematic vibrational spectroscopic assignment and analysis of piracetam [(2-oxo-1-pyrrolidineacetamide)] have been carried out using FT-IR and FT-Raman spectral data. The vibrational analysis was aided by an electronic structure calculation based on the hybrid density functional method B3LYP using a 6-311++G(d,p) basis set. Molecular equilibrium geometries, electronic energies, IR and Raman intensities, and harmonic vibrational frequencies have been computed. The assignments are based on the experimental IR and Raman spectra, and a complete assignment of the observed spectra has been proposed. The UV-visible spectrum of the compound was recorded and the electronic properties, such as HOMO and LUMO energies and the maximum absorption wavelengths λmax, were determined by the time-dependent DFT (TD-DFT) method. The geometrical parameters, vibrational frequencies and absorption wavelengths were compared with the experimental data. The complete vibrational assignments are performed on the basis of the potential energy distributions (PED) of the vibrational modes in terms of natural internal coordinates. The simulated FT-IR, FT-Raman, and UV spectra of the title compound have been constructed. Molecular docking studies have been carried out in the active site of piracetam by using Argus Lab. In addition, the potential energy surface, HOMO and LUMO energies, first-order hyperpolarizability and the molecular electrostatic potential have been computed.
Scotti, A.; Butman, B.; Beardsley, R.C.; Alexander, P.S.; Anderson, S.
2005-01-01
The algorithm used to transform velocity signals from beam coordinates to earth coordinates in an acoustic Doppler current profiler (ADCP) relies on the assumption that the currents are uniform over the horizontal distance separating the beams. This condition may be violated by (nonlinear) internal waves, which can have wavelengths as small as 100-200 m. In this case, the standard algorithm combines velocities measured at different phases of a wave and produces horizontal velocities that increasingly differ from true velocities with distance from the ADCP. Observations made in Massachusetts Bay show that currents measured with a bottom-mounted upward-looking ADCP during periods when short-wavelength internal waves are present differ significantly from currents measured by point current meters, except very close to the instrument. These periods are flagged with high error velocities by the standard ADCP algorithm. In this paper measurements from the four spatially diverging beams and the backscatter intensity signal are used to calculate the propagation direction and celerity of the internal waves. Once this information is known, a modified beam-to-earth transformation that combines appropriately lagged beam measurements can be used to obtain current estimates in earth coordinates that compare well with pointwise measurements. ?? 2005 American Meteorological Society.
Firefly as a novel swarm intelligence variable selection method in spectroscopy.
Goodarzi, Mohammad; dos Santos Coelho, Leandro
2014-12-10
A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Up to now, many feature selection techniques have been developed. Among the different types of feature selection techniques, those based on swarm intelligence optimization methodologies are particularly interesting, since they simulate animal and insect behavior to, e.g., find the shortest path between a food source and the nest. The decision is made by a crowd, leading to a more robust model with less risk of falling into local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with the genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrate improved prediction results in comparison to a PLS model built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a lower number of selected wavelengths while the prediction performance of the built PLS model stays the same.
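For reference, a bare-bones continuous firefly update (Yang-style attractiveness β₀·exp(−γr²) plus a random step) is sketched below; for wavelength selection the positions would be binarized into include/exclude masks and scored with a PLS model, whereas the toy objective here is a stand-in.

```python
# Hedged firefly-algorithm skeleton: dimmer fireflies move toward
# brighter ones with distance-decaying attractiveness.
import numpy as np
rng = np.random.default_rng(0)

def objective(x):                        # toy stand-in for a PLS-based score
    return np.sum((x - 0.3) ** 2)

n, dim, beta0, gamma, alpha = 15, 10, 1.0, 1.0, 0.1
X = rng.random((n, dim))
for _ in range(200):
    brightness = np.array([objective(x) for x in X])   # lower is brighter here
    for i in range(n):
        for j in range(n):
            if brightness[j] < brightness[i]:          # i moves toward j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
print(min(objective(x) for x in X))
```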
An operational retrieval algorithm for determining aerosol optical properties in the ultraviolet
NASA Astrophysics Data System (ADS)
Taylor, Thomas E.; L'Ecuyer, Tristan S.; Slusser, James R.; Stephens, Graeme L.; Goering, Christian D.
2008-02-01
This paper describes a number of practical considerations concerning the optimization and operational implementation of an algorithm used to characterize the optical properties of aerosols across part of the ultraviolet (UV) spectrum. The algorithm estimates values of aerosol optical depth (AOD) and aerosol single scattering albedo (SSA) at seven wavelengths in the UV, as well as total column ozone (TOC) and wavelength-independent asymmetry factor (g) using direct and diffuse irradiances measured with a UV multifilter rotating shadowband radiometer (UV-MFRSR). A novel method for cloud screening the irradiance data set is introduced, as well as several improvements and optimizations to the retrieval scheme which yield a more realistic physical model for the inversion and increase the efficiency of the algorithm. Introduction of a wavelength-dependent retrieval error budget generated from rigorous forward model analysis as well as broadened covariances on the a priori values of AOD, SSA and g and tightened covariances of TOC allows sufficient retrieval sensitivity and resolution to obtain unique solutions of aerosol optical properties as demonstrated by synthetic retrievals. Analysis of a cloud screened data set (May 2003) from Panther Junction, Texas, demonstrates that the algorithm produces realistic values of the optical properties that compare favorably with pseudo-independent methods for AOD, TOC and calculated Ångstrom exponents. Retrieval errors of all parameters (except TOC) are shown to be negatively correlated to AOD, while the Shannon information content is positively correlated, indicating that retrieval skill improves with increasing atmospheric turbidity. When implemented operationally on more than thirty instruments in the Ultraviolet Monitoring and Research Program's (UVMRP) network, this retrieval algorithm will provide a comprehensive and internally consistent climatology of ground-based aerosol properties in the UV spectral range that can be used for both validation of satellite measurements as well as regional aerosol and ultraviolet transmission studies.
Zhang, Xuejun; Lei, Jiaxing
2015-01-01
Considering reducing the airspace congestion and the flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using the real traffic data from China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to the existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm as well as other parallel evolution algorithms with different migration topology. PMID:26180840
Administrative Data Algorithms Can Describe Ambulatory Physician Utilization
Shah, Baiju R; Hux, Janet E; Laupacis, Andreas; Zinman, Bernard; Cauch-Dudek, Karen; Booth, Gillian L
2007-01-01
Objective To validate algorithms using administrative data that characterize ambulatory physician care for patients with a chronic disease. Data Sources Seven-hundred and eighty-one people with diabetes were recruited mostly from community pharmacies to complete a written questionnaire about their physician utilization in 2002. These data were linked with administrative databases detailing health service utilization. Study Design An administrative data algorithm was defined that identified whether or not patients received specialist care, and it was tested for agreement with self-report. Other algorithms, which assigned each patient to a primary care and specialist physician, were tested for concordance with self-reported regular providers of care. Principal Findings The algorithm to identify whether participants received specialist care had 80.4 percent agreement with questionnaire responses (κ = 0.59). Compared with self-report, administrative data had a sensitivity of 68.9 percent and specificity 88.3 percent for identifying specialist care. The best administrative data algorithm to assign each participant's regular primary care and specialist providers was concordant with self-report in 82.6 and 78.2 percent of cases, respectively. Conclusions Administrative data algorithms can accurately match self-reported ambulatory physician utilization. PMID:17610448
Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.
He, Lifeng; Chao, Yuyan; Suzuki, Kenji
2011-08-01
Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning a provisional label to each foreground voxel, we assign one to each run. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
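Both algorithms rest on the same label-equivalence core: a union-find structure that records provisional-label equivalences during the first scan and resolves each label to its set representative in the second. A minimal sketch (with path halving, an implementation choice of ours) follows.

```python
# Minimal union-find core of label-equivalence labeling.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]      # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[max(ra, rb)] = min(ra, rb)  # keep the smaller label

parent = list(range(10))
union(parent, 3, 5)
union(parent, 5, 7)
print([find(parent, i) for i in range(10)])   # labels 3, 5, 7 resolve to 3
```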
Dessimoz, Christophe; Boeckmann, Brigitte; Roth, Alexander C J; Gonnet, Gaston H
2006-01-01
Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.
Relabeling exchange method (REM) for learning in neural networks
NASA Astrophysics Data System (ADS)
Wu, Wen; Mammone, Richard J.
1994-02-01
The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.
Array distribution in data-parallel programs
NASA Technical Reports Server (NTRS)
Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Sheffler, Thomas J.
1994-01-01
We consider distribution at compile time of the array data in a distributed-memory implementation of a data-parallel program written in a language like Fortran 90. We allow dynamic redistribution of data and define a heuristic algorithmic framework that chooses distribution parameters to minimize an estimate of program completion time. We represent the program as an alignment-distribution graph. We propose a divide-and-conquer algorithm for distribution that initially assigns a common distribution to each node of the graph and successively refines this assignment, taking computation, realignment, and redistribution costs into account. We explain how to estimate the effect of distribution on computation cost and how to choose a candidate set of distributions. We present the results of an implementation of our algorithms on several test problems.
Tolić, Nikola; Liu, Yina; Liyu, Andrey; Shen, Yufeng; Tfaily, Malak M; Kujawinski, Elizabeth B; Longnecker, Krista; Kuo, Li-Jung; Robinson, Errol W; Paša-Tolić, Ljiljana; Hess, Nancy J
2017-12-05
Ultrahigh resolution mass spectrometry, such as Fourier transform ion cyclotron resonance mass spectrometry (FT ICR MS), can resolve thousands of molecular ions in complex organic matrices. A Compound Identification Algorithm (CIA) was previously developed for automated elemental formula assignment for natural organic matter (NOM). In this work, we describe the software Formularity, which provides a user-friendly interface for the CIA function and for a newly developed search function, the Isotopic Pattern Algorithm (IPA). While CIA assigns elemental formulas for compounds containing C, H, O, N, S, and P, IPA is capable of assigning formulas for compounds containing other elements. We used halogenated organic compounds (HOC), a chemical class that is ubiquitous in nature as well as in anthropogenic systems, as an example to demonstrate the capability of Formularity with IPA. A HOC standard mix was used to evaluate the identification confidence of IPA. Tap water and a HOC spike in Suwannee River NOM were used to assess HOC identification in complex environmental samples. Strategies for the reconciliation of CIA and IPA assignments are discussed. The software and sample databases with documentation are freely available.
Taboo search algorithm for item assignment in synchronized zone automated order picking system
NASA Astrophysics Data System (ADS)
Wu, Yingying; Wu, Yaohua
2014-07-01
The idle time, which is part of the order fulfillment time, is determined by the number of items in each zone; therefore, the item assignment method affects picking efficiency. Previous studies focused only on balancing the number of item types across zones, not the number of items or the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective of this problem can be simplified from minimizing idle time to minimizing the idle factor. Based on this, a model of the item assignment problem in a synchronized-zone automated order picking system is built. The model is a relaxed form of the parallel machine scheduling problem, which has been proven NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is to minimize the greatest idle factor across zones with the 2-exchange algorithm. Finally, a simulation using data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The result verifies the model and shows that the algorithm reliably reduces idle time, by 45.63% on average. This research proposes an approach to measure idle time in synchronized-zone automated order picking systems; it can improve picking efficiency significantly and can serve as a theoretical basis when optimizing such systems.
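The flavor of the search can be sketched with a simplified relocate move in place of the paper's 2-exchange: repeatedly shift an item out of the zone with the greatest idle factor while a taboo list forbids recent moves. The abstracted idle factor, move rule, and tenure below are illustrative assumptions.

```python
# Hedged sketch: taboo-controlled relocate moves that shrink the largest
# zone "idle factor" (abstracted here as summed item weight).
import random
random.seed(0)

def idle_factor(zone, weight):
    return sum(weight[i] for i in zone)

def taboo_search(items, weight, n_zones=3, iters=300, tenure=10):
    zones = [set(items[k::n_zones]) for k in range(n_zones)]  # round-robin start
    taboo = {}                                                # item -> release time
    for t in range(iters):
        worst = max(zones, key=lambda z: idle_factor(z, weight))
        best = min(zones, key=lambda z: idle_factor(z, weight))
        movable = [i for i in worst if taboo.get(i, -1) < t]
        if worst is best or not movable:
            continue
        item = max(movable, key=lambda i: weight[i])  # move heaviest allowed item
        worst.remove(item)
        best.add(item)
        taboo[item] = t + tenure                      # forbid moving it again soon
    return zones, max(idle_factor(z, weight) for z in zones)

items = list(range(12))
weight = {i: random.randint(1, 9) for i in items}
print(taboo_search(items, weight))
```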
Analysis of L-band Multi-Channel Sea Clutter
2010-08-01
Some researchers found that the use of a hybrid algorithm of PS and GA could accelerate the convergence for array beamforming designs (Yeo and Lu...to be shown is array failure correction using the PS algorithm. Assume element 5 of a 32 half-wavelength spacing linear array is in failure. The goal... algorithm. The blue one is the 20 dB Chebyshev pattern and the template in red is the goal pattern to achieve. Two corrected beam patterns are
NASA Astrophysics Data System (ADS)
Nakamura, Hirotaka; Suzuki, Hiro; Kani, Jun-Ichi; Iwatsuki, Katsumi
2006-05-01
This paper proposes and demonstrates a reliable wide-area wavelength-division-multiplexing passive optical network (WDM-PON) with a wavelength-shifted protection scheme. This protection scheme utilizes the cyclic property of a 2 × N athermal arrayed-waveguide grating and two kinds of wavelength allocations, assigned to working and protection, respectively. Compared with conventional protection schemes, this scheme does not need a 3-dB optical coupler, thus ensuring the large loss budget that is suited to wide-area WDM-PONs. It also features a passive access node and does not require a protection function in the optical network unit (ONU). The feasibility of the proposed scheme is experimentally confirmed with a carrier-distributed WDM-PON with gigabit Ethernet interface (GbE-IF) and 10-GbE-IF, in which the ONU does not employ a light source, and all wavelengths for upstream signals are centralized and distributed from the central office.
Predictive Cache Modeling and Analysis
2011-11-01
metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed...Cache Miss Minimization Technology To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended to our...it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing
On Study of Air/Space-borne Dual-Wavelength Radar for Estimates of Rain Profiles
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert
2004-01-01
In this study, a framework is discussed for applying air/space-borne dual-wavelength radar to the estimation of characteristic parameters of hydrometeors. The focus of our study is the Global Precipitation Measurement (GPM) precipitation radar, a dual-wavelength radar that operates at Ku (13.8 GHz) and Ka (35 GHz) bands. As the droplet size distributions (DSD) of rain are expressed as the Gamma function, a procedure is described to derive the median volume diameter (D(sub 0)) and particle number concentration (N(sub T)) of rain. The correspondence of an important dual-wavelength radar quantity, the differential frequency ratio (DFR), to D(sub 0) in the melting region is given as a function of the distance from the 0 C isotherm. A self-consistent iterative algorithm, which shows promise in accounting for rain attenuation of the radar and inferring the DSD without use of the surface reference technique (SRT), is examined by applying it to apparent radar reflectivity profiles simulated from the DSD model and then comparing the estimates with the model (true) results. For light to moderate rain, the self-consistent rain profiling approach converges to unique and correct solutions only if the same shape factors of the Gamma functions are used both to generate and to retrieve the rain profiles, but it does not converge to the true solutions if the DSD form is not chosen correctly. To further examine the dual-wavelength techniques, the self-consistent algorithm, along with forward and backward rain profiling algorithms, is then applied to measurements taken from the 2nd-generation Precipitation Radar (PR-2) built by the Jet Propulsion Laboratory. It is found that rain profiles estimated from the forward and backward approaches are not sensitive to the shape factor of the DSD Gamma distribution, but the self-consistent method is.
Guidance and control of swarms of spacecraft
NASA Astrophysics Data System (ADS)
Morgan, Daniel James
There has been considerable interest in formation flying spacecraft due to their potential to perform certain tasks at lower cost than monolithic spacecraft. Formation flying enables the use of smaller, cheaper spacecraft that distribute the risk of the mission. Recently, the ideas of formation flying have been extended to spacecraft swarms made up of hundreds to thousands of 100-gram-class spacecraft known as femtosatellites. The large number of spacecraft and the limited capabilities of each individual spacecraft present a significant challenge in guidance, navigation, and control. This dissertation deals with the guidance and control algorithms required to enable the flight of spacecraft swarms. The algorithms developed in this dissertation are focused on achieving two main goals: swarm keeping and swarm reconfiguration. The objectives of swarm keeping are to maintain bounded relative distances between spacecraft, prevent collisions between spacecraft, and minimize the propellant used by each spacecraft. Swarm reconfiguration requires the transfer of the swarm to a specific shape. As with swarm keeping, minimizing the propellant used and preventing collisions are the main objectives. Additionally, the algorithms required for swarm keeping and swarm reconfiguration should be decentralized with respect to communication and computation so that they can be implemented on femtosats, which have limited hardware capabilities. The algorithms developed in this dissertation are concerned with swarms located in low Earth orbit. In these orbits, Earth oblateness and atmospheric drag have a significant effect on the relative motion of the swarm. The complicated dynamic environment of low Earth orbits further complicates the swarm-keeping and swarm-reconfiguration problems. To better develop and test these algorithms, a nonlinear, relative dynamic model with J2 and drag perturbations is developed. This model is used throughout this dissertation to validate the algorithms using computer simulations. The swarm-keeping problem can be solved by placing the spacecraft on J2-invariant relative orbits, which prevent collisions and minimize the drift of the swarm over hundreds of orbits using a single burn. These orbits are achieved by energy matching the spacecraft to the reference orbit. Additionally, these conditions can be repeatedly applied to minimize the drift of the swarm when atmospheric drag has a large effect (orbits with an altitude under 500 km). The swarm reconfiguration is achieved using two steps: trajectory optimization and assignment. The trajectory optimization problem can be written as a nonlinear, optimal control problem. This optimal control problem is discretized, decoupled, and convexified so that the individual femtosats can efficiently solve the optimization. Sequential convex programming is used to generate the control sequences and trajectories required to safely and efficiently transfer a spacecraft from one position to another. The sequence of trajectories is shown to converge to a Karush-Kuhn-Tucker point of the nonconvex problem. In the case where many of the spacecraft are interchangeable, a variable-swarm, distributed auction algorithm is used to determine the assignment of spacecraft to target positions. This auction algorithm requires only local communication, and all of the bidding parameters are stored locally. The assignment generated using this auction algorithm is shown to be near optimal and to converge in a finite number of bids.
Additionally, the bidding process is used to modify the number of targets used in the assignment so that the reconfiguration can be achieved even when there is a disconnected communication network or a significant loss of agents. Once the assignment is achieved, the trajectory optimization can be run using the terminal positions determined by the auction algorithm. To implement these algorithms in real time, a model predictive control formulation is used. Model predictive control uses a finite horizon to apply the most up-to-date control sequence while simultaneously calculating a new assignment and trajectory based on updated state information. Using a finite horizon allows collisions to be considered only between spacecraft that are near each other at the current time. This relaxes the all-to-all communication assumption so that only neighboring agents need to communicate. Experimental validation is done using the formation flying testbed. The swarm-reconfiguration algorithms are tested using multiple quadrotors. Experiments have been performed using sequential convex programming for offline trajectory planning, model predictive control and sequential convex programming for real-time trajectory generation, and the variable-swarm, distributed auction algorithm for optimal assignment. These experiments show that the swarm-reconfiguration algorithms can be implemented in real time using actual hardware. In general, this dissertation presents guidance and control algorithms that maintain and reconfigure swarms of spacecraft while maintaining the shape of the swarm, preventing collisions between the spacecraft, and minimizing the amount of propellant used.
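The assignment step described above can be illustrated with a centrally simulated auction in the spirit of Bertsekas' algorithm; the distributed, variable-swarm aspects (purely local communication, target-count adaptation) are omitted, so this is a sketch of the bidding mechanics only:

```python
import numpy as np

def auction_assignment(cost, eps=1e-3):
    """Assign each agent to one target (minimizing total cost) by letting
    unassigned agents bid for their best target. Requires at least 2 targets.
    With eps > 0 prices rise strictly at every bid, so the loop terminates."""
    n = cost.shape[0]
    value = -cost.astype(float)            # bidders maximize value
    price = np.zeros(n)
    owner = -np.ones(n, dtype=int)         # target -> agent currently holding it
    assigned = -np.ones(n, dtype=int)      # agent -> target
    while (assigned < 0).any():
        i = int(np.where(assigned < 0)[0][0])   # pick any unassigned agent
        gain = value[i] - price
        j = int(np.argmax(gain))                # best target for agent i
        second = np.partition(gain, -2)[-2]     # second-best gain
        price[j] += gain[j] - second + eps      # raise price by the bid increment
        if owner[j] >= 0:
            assigned[owner[j]] = -1             # evict the previous owner
        owner[j] = i
        assigned[i] = j
    return assigned
```

The eps parameter trades optimality for speed: the final assignment is within n*eps of optimal, matching the "near optimal" convergence claim in the abstract.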
Oceanographic applications of laser technology
NASA Technical Reports Server (NTRS)
Hoge, F. E.
1988-01-01
Oceanographic activities with the Airborne Oceanographic Lidar (AOL) for the past several years have primarily been focused on using active (laser-induced pigment fluorescence) and concurrent passive ocean color spectra to improve existing ocean color algorithms for estimating primary production in the world's oceans. The most significant results were the development of a technique for selecting optimal passive wavelengths for recovering phytoplankton photopigment concentration and the application of this technique, termed active-passive correlation spectroscopy (APCS), to various forms of passive ocean color algorithms. Included in this activity is the use of airborne laser and passive ocean color data for the development of advanced satellite ocean color sensors. Promising on-wavelength subsurface scattering layer measurements were recently obtained. A partial summary of these results is shown.
Modeling and image reconstruction in spectrally resolved bioluminescence tomography
NASA Astrophysics Data System (ADS)
Dehghani, Hamid; Pogue, Brian W.; Davis, Scott C.; Patterson, Michael S.
2007-02-01
Recent interest in modeling and reconstruction algorithms for Bioluminescence Tomography (BLT) has increased and led to the general consensus that non-spectrally resolved, intensity-based BLT results in a non-unique problem. However, the light emitted from, for example, firefly luciferase is widely distributed over the band of wavelengths from 500 nm to 650 nm and above, with the dominant fraction emitted from tissue being above 550 nm. This paper demonstrates the development of an algorithm for multi-wavelength 3D spectrally resolved BLT image reconstruction in a mouse model. It is shown that using single-view data, bioluminescence sources up to 15 mm deep can be successfully recovered given correct information about the underlying tissue absorption and scatter.
Real-Time Visualization of Tissue Ischemia
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)
2000-01-01
A real-time display of tissue ischemia is discussed which comprises three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weigend, Florian, E-mail: florian.weigend@kit.edu
2014-10-07
Energy surfaces of metal clusters usually show a large variety of local minima. For homo-metallic species the energetically lowest ones can be found reliably with genetic algorithms, in combination with density functional theory without system-specific parameters. For mixed-metallic clusters this is much more difficult, as for a given arrangement of nuclei one additionally has to find the best of many possibilities of assigning different metal types to the individual positions. In the framework of electronic structure methods this second issue is treatable at comparably low cost, at least for elements with similar atomic number, by means of first-order perturbation theory, as shown previously [F. Weigend, C. Schrodt, and R. Ahlrichs, J. Chem. Phys. 121, 10380 (2004)]. In the present contribution the extension of a genetic algorithm with the re-assignment of atom types to atom sites is proposed and tested for the search of the global minima of PtHf{sub 12} and [LaPb{sub 7}Bi{sub 7}]{sup 4−}. For both cases the (putative) global minimum is reliably found with the extended technique, which is not the case for the "pure" genetic algorithm.
NASA Astrophysics Data System (ADS)
Essen, Helmut; Brehm, Thorsten; Boehmsdorff, Stephan
2007-10-01
Interferometric Synthetic Aperture Radar has the capability to provide the user with 3-D information on land surfaces. To gather data with high height-estimation accuracy it is necessary to use a wide interferometric baseline or a high radar frequency. However, the problem of resolving the phase ambiguity is more critical at smaller wavelengths than at longer ones, as the unambiguous height interval is inversely proportional to the radar wavelength. To overcome this shortcoming, a multiple-baseline approach can be used with a number of neighbouring horns and baseline lengths increasing from narrow to wide. The narrowest baseline, corresponding to adjacent horns, is then assumed to be unambiguous in phase. This initial interferogram is used as a starting point for the algorithm, which in the next step unwraps the interferogram with the next wider baseline, using the coarse height information to solve the phase ambiguities. This process is repeated consecutively until the interferogram with the highest precision is unwrapped. In exchange for the cost of this multi-channel approach, the algorithm is simple and robust, and the processing time is even reduced considerably compared to traditional methods. The multiple-baseline approach is especially adequate for millimeterwave radars, as antenna horns with relatively small aperture can be used while a sufficient 3-dB beamwidth is maintained. The paper describes the multiple-baseline algorithm and shows the results of tests on real data and a synthetic area. Possibilities and limitations of this approach are discussed. Examples of digital elevation maps derived from measured data at millimeterwaves are shown.
Quantum annealing for combinatorial clustering
NASA Astrophysics Data System (ADS)
Kumar, Vaibhaw; Bass, Gideon; Tomlin, Casey; Dulny, Joseph
2018-02-01
Clustering is a powerful machine learning technique that groups "similar" data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-the-cluster distances between points. The straightforward approach involves examining all the possible assignments of points to each of the clusters. This approach guarantees the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on a purely classical solver "qbsolv." The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
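The mapping from clustering to a quadratic binary optimization problem can be sketched with a standard one-hot encoding; the penalty construction below is the textbook recipe, not necessarily the authors' exact formulation:

```python
import numpy as np

def clustering_qubo(D, K, penalty):
    """Build a QUBO for assigning N points to K clusters.
    x[i,k] = 1 means point i is in cluster k; flattened index i*K + k.
    Objective: sum of within-cluster pairwise distances D[i,j],
    plus penalty * (sum_k x[i,k] - 1)^2 per point (one-hot constraint)."""
    N = D.shape[0]
    Q = np.zeros((N * K, N * K))
    for i in range(N):
        for j in range(i + 1, N):
            for k in range(K):
                Q[i * K + k, j * K + k] += D[i, j]   # cost if i and j share cluster k
    for i in range(N):
        for k in range(K):
            # expanding (sum_k x - 1)^2 with x^2 = x gives -x_k + 2*x_k*x_k'
            Q[i * K + k, i * K + k] -= penalty
            for k2 in range(k + 1, K):
                Q[i * K + k, i * K + k2] += 2 * penalty
    return Q   # feed to an annealer or a classical QUBO solver such as qbsolv
```

The penalty must exceed the largest within-cluster cost a point can incur, so that violating the one-hot constraint is never energetically favorable.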
An analysis of spectral envelope-reduction via quadratic assignment problems
NASA Technical Reports Server (NTRS)
George, Alan; Pothen, Alex
1994-01-01
A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate the latter two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds on the envelope size for certain classes of meshes.
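A minimal sketch of the ordering step, using a dense eigensolver for clarity (a sparse Lanczos solver would be used for matrices of practical size, and a connected sparsity graph is assumed):

```python
import numpy as np

def spectral_envelope_order(A):
    """Reorder a sparse symmetric matrix by sorting the Fiedler vector of the
    Laplacian associated with its sparsity pattern, following the spectral
    envelope-reduction idea described in the abstract."""
    S = (A != 0).astype(float)               # sparsity pattern as adjacency
    np.fill_diagonal(S, 0.0)
    L = np.diag(S.sum(axis=1)) - S           # graph Laplacian
    w, V = np.linalg.eigh(L)                 # eigenvalues in ascending order
    fiedler = V[:, 1]                        # eigenvector of 2nd-smallest eigenvalue
    return np.argsort(fiedler)               # permutation that tends to shrink the envelope
```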
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
The Quadratic Assignment Problem (QAP) is classified as NP-hard. It has been used to model many problems in several areas such as operational research, combinatorial data analysis, parallel and distributed computing, and optimization problems such as graph partitioning and the Travelling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics, and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem in which 8 facilities need to be assigned to 8 locations. Hence we have modeled a QAP instance with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the total product of flows and distances is minimized, where flow is the movement from one facility to another and distance is the distance between the facilities' locations. In this application, the objective of the QAP is to minimize the total walking of lecturers from one destination (flow) to another (distance).
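For illustration, the QAP objective and a bare-bones pheromone-based construction might look as follows; the parameter values and the best-so-far pheromone update are illustrative choices, not the paper's settings:

```python
import numpy as np

def qap_cost(perm, flow, dist):
    """Total flow x distance when facility f is placed at location perm[f]."""
    n = len(perm)
    return sum(flow[a, b] * dist[perm[a], perm[b]]
               for a in range(n) for b in range(n))

def aco_qap(flow, dist, ants=20, iters=100, rho=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = flow.shape[0]
    tau = np.ones((n, n))                    # pheromone: facility f at location l
    best_perm, best_c = None, np.inf
    for _ in range(iters):
        for _ in range(ants):
            perm, free = [], list(range(n))
            for f in range(n):               # build an assignment facility by facility
                p = tau[f, free] / tau[f, free].sum()
                loc = rng.choice(free, p=p)
                perm.append(loc)
                free.remove(loc)
            c = qap_cost(perm, flow, dist)
            if c < best_c:
                best_perm, best_c = perm, c
        tau *= (1 - rho)                     # evaporation
        for f, loc in enumerate(best_perm):
            tau[f, loc] += 1.0 / best_c      # reinforce the best-so-far assignment
    return best_perm, best_c
```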
Optimal lightpath placement on a metropolitan-area network linked with optical CDMA local nets
NASA Astrophysics Data System (ADS)
Wang, Yih-Fuh; Huang, Jen-Fa
2008-01-01
A flexible optical metropolitan-area network (OMAN) [J.F. Huang, Y.F. Wang, C.Y. Yeh, Optimal configuration of OCDMA-based MAN with multimedia services, in: 23rd Biennial Symposium on Communications, Queen's University, Kingston, Canada, May 29-June 2, 2006, pp. 144-148] structured with OCDMA linkage is proposed to support multimedia services with multi-rate or various qualities of service. To prioritize transmissions in OCDMA, the orthogonal variable spreading factor (OVSF) codes widely used in wireless CDMA are adopted. In addition, for feasible multiplexing, unipolar OCDMA modulation [L. Nguyen, B. Aazhang, J.F. Young, All-optical CDMA with bipolar codes, IEEE Electron. Lett. 31 (6) (1995) 469-470] is used to generate the code selector of multi-rate OMAN, and a flexible fiber-grating-based system is used for the equipment on OCDMA-OVSF code. These enable an OMAN to assign suitable OVSF codes when creating different-rate lightpaths. How to optimally configure a multi-rate OMAN is a challenge because of displaced lightpaths. In this paper, a genetically modified genetic algorithm (GMGA) [L.R. Chen, Flexible fiber Bragg grating encoder/decoder for hybrid wavelength-time optical CDMA, IEEE Photon. Technol. Lett. 13 (11) (2001) 1233-1235] is used to preplan lightpaths in order to optimally configure an OMAN. To evaluate the performance of the GMGA, we compared it with different preplanning optimization algorithms. Simulation results revealed that the GMGA very efficiently solved the problem.
On Channel-Discontinuity-Constraint Routing in Wireless Networks☆
Sankararaman, Swaminathan; Efrat, Alon; Ramasubramanian, Srinivasan; Agarwal, Pankaj K.
2011-01-01
Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g. in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, between two nodes, it is of interest to compute a path with a channel assignment for the links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, termed as the "Channel-Discontinuity-Constraint" (CDC). CDC-paths are also useful in TDMA systems, where, preferably, consecutive links are assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties; a sub-network containing O(n/θ) links, for any θ > 0, such that CDC-paths increase in cost by at most a factor t = (1 − 2 sin(θ/2))^(-2). We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n^2) fixed-size messages, by developing an extension of Edmonds' algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n^2) time, improving the previous best algorithm, which requires O(n^3) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed. PMID:24443646
Banerjee, Saswatee; Hoshino, Tetsuya; Cole, James B
2008-08-01
We introduce a new implementation of the finite-difference time-domain (FDTD) algorithm with recursive convolution (RC) for first-order Drude metals. We implemented RC for both Maxwell's equations for light polarized in the plane of incidence (TM mode) and the wave equation for light polarized normal to the plane of incidence (TE mode). We computed the Drude parameters at each wavelength using the measured value of the dielectric constant as a function of the spatial and temporal discretization to ensure both the accuracy of the material model and algorithm stability. For the TE mode, where Maxwell's equations reduce to the wave equation (even in a region of nonuniform permittivity) we introduced a wave equation formulation of RC-FDTD. This greatly reduces the computational cost. We used our methods to compute the diffraction characteristics of metallic gratings in the visible wavelength band and compared our results with frequency-domain calculations.
The Least-Squares Calibration on the Micro-Arcsecond Metrology Test Bed
NASA Technical Reports Server (NTRS)
Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.
2006-01-01
The Space Interferometry Mission (SIM) will measure optical path differences (OPDs) with an accuracy of tens of picometers, requiring precise calibration of the instrument. In this article, we present a calibration approach based on fitting starlight interference fringes in the interferometer using a least-squares algorithm. The algorithm is first analyzed for the case of a monochromatic light source with a monochromatic fringe model. Using fringe data measured on the Micro-Arcsecond Metrology (MAM) testbed with a laser source, the error in the determination of the wavelength is shown to be less than 10 pm. By using a quasi-monochromatic fringe model, the algorithm can be extended to the case of a white light source with a narrow detection bandwidth. In SIM, because of the finite bandwidth of each CCD pixel, the effect of the fringe envelope cannot be neglected, especially for the larger optical path difference range favored for the wavelength calibration.
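For the monochromatic case, the fringe model is linear in the amplitude terms once a trial wavelength is fixed, so a simple sketch scans candidate wavelengths and keeps the best linear least-squares fit. This illustrates the approach only; it is not the MAM pipeline:

```python
import numpy as np

def fit_fringe(opd, intensity, wl_grid):
    """Least-squares fit of a monochromatic fringe
    I = A + B*cos(2*pi*opd/wl + phi).
    For a fixed trial wavelength the model is linear in
    (A, B*cos(phi), B*sin(phi)), so scan wl_grid and keep the
    wavelength with the smallest residual."""
    best = None
    for wl in wl_grid:
        ph = 2 * np.pi * opd / wl
        M = np.column_stack([np.ones_like(opd), np.cos(ph), -np.sin(ph)])
        coef, *_ = np.linalg.lstsq(M, intensity, rcond=None)
        r = np.sum((intensity - M @ coef) ** 2)   # sum of squared residuals
        if best is None or r < best[0]:
            best = (r, wl, coef)
    return best[1], best[2]   # wavelength estimate and (A, B*cos(phi), B*sin(phi))
```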
NASA Astrophysics Data System (ADS)
Begum, A. Yasmine; Gireesh, N.
2018-04-01
In a superheater, steam temperature is controlled in a cascade control loop consisting of PI and PID controllers. To improve superheater steam temperature control, the controller gains in the cascade loop have to be tuned efficiently. The mathematical model of the superheater is given by sets of nonlinear partial differential equations. The tuning methods studied here are designed for first-order plus time delay transfer function models; hence, from the dynamical model of the superheater, a FOPTD model is derived using the frequency response method. Then, using the Chien-Hrones-Reswick Tuning Algorithm and the Gain-Phase Assignment Algorithm, optimum controller gains are found based on the least value of the integral time-weighted absolute error.
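A sketch of the Chien-Hrones-Reswick setpoint rules (0% overshoot variant) for a FOPTD model G(s) = K e^(-Ls) / (Ts + 1); these are the commonly tabulated coefficients, and the paper's gain-phase assignment step is not shown:

```python
def chr_pi_gains(K, T, L):
    """Chien-Hrones-Reswick setpoint tuning (0% overshoot) for a FOPTD model
    G(s) = K * exp(-L*s) / (T*s + 1); returns PI gains (Kp, Ti)."""
    return 0.35 * T / (K * L), 1.17 * T

def chr_pid_gains(K, T, L):
    """CHR setpoint tuning (0% overshoot), PID variant: returns (Kp, Ti, Td)."""
    return 0.6 * T / (K * L), T, 0.5 * L
```

For example, a model with K = 2, T = 120 s, and L = 15 s gives a PID gain set of Kp = 2.4, Ti = 120 s, Td = 7.5 s for the inner-loop design point.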
Fan, Shu-xiang; Huang, Wen-qian; Li, Jiang-bo; Zhao, Chun-jiang; Zhang, Bao-hua
2014-08-01
To improve the precision and robustness of the NIR model for the soluble solids content (SSC) of pear, a total of 160 pears was used for calibration (n=120) and prediction (n=40). Different spectral pretreatment methods, including standard normal variate (SNV) and multiplicative scatter correction (MSC), were used before further analysis. A combination of genetic algorithm (GA) and successive projections algorithm (SPA) was proposed to select the most effective wavelengths after uninformative variable elimination (UVE) from the original spectra, the SNV-pretreated spectra, and the MSC-pretreated spectra, respectively. The selected variables were used as the inputs of a least squares-support vector machine (LS-SVM) model for determining the SSC of pear. The results indicated that the LS-SVM model built using SNV-UVE-GA-SPA on 30 characteristic wavelengths, selected from a full spectrum of 3112 wavelengths, achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.956 and 0.271 for SSC. The model is reliable and the predicted result is effective. The method can meet the requirement of quick measurement of the SSC of pear and might be important for the development of portable instruments and online monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, L; Ziemer, B; Malkasian, S
Purpose: To evaluate the accuracy of a patient-specific coronary perfusion territory assignment algorithm that uses CT angiography (CTA) and a minimum-cost-path approach to assign coronary perfusion territories on a voxel-by-voxel basis for determination of myocardial mass at risk. Methods: Intravenous (IV) contrast (370 mg/mL iodine, 25 mL, 7 mL/s) was injected centrally into five swine (35–45 kg) and CTA was performed using a 320-slice CT scanner at 100 kVp and 200 mA. Additionally, a 4F catheter was advanced into the left anterior descending (LAD), left circumflex (LCX), and right coronary artery (RCA) and contrast (30 mg/mL iodine, 10 mL, 1.5 mL/s) was directly injected into each coronary artery for isolation of reference coronary perfusion territories. Semiautomatic myocardial segmentation of the CTA data was then performed and the centerlines of the LAD, LCX, and RCA were digitally extracted through image processing. Individual coronary perfusion territories were then assigned using a minimum-cost-path approach and were quantitatively compared to the reference coronary perfusion territories. Results: The results of the coronary perfusion territory assignment algorithm were in good agreement with the reference coronary perfusion territories. The average volumetric assignment error from mitral orifice to apex was 5.5 ± 1.1%, corresponding to 2.1 ± 0.7 grams of myocardial mass misassigned for each coronary perfusion territory. Conclusion: The results indicate that accurate coronary perfusion territory assignment is possible on a voxel-by-voxel basis using CTA data and an assignment algorithm based on a minimum-cost-path approach. Thus, the technique can potentially be used to accurately determine patient-specific myocardial mass at risk distal to a coronary stenosis, improving coronary lesion assessment and treatment. Conflict of Interest: Grant funding from Toshiba America Medical Systems.
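The minimum-cost-path assignment can be sketched as a multi-source shortest-path labeling over myocardial voxels. The unit Euclidean step cost below is an assumption for illustration, as the abstract does not specify the cost metric:

```python
import heapq
import numpy as np

def assign_territories(myo_mask, seeds):
    """Voxel-wise perfusion-territory assignment by multi-source Dijkstra.
    myo_mask: 3-D boolean array of myocardium voxels.
    seeds: dict artery_name -> list of (z, y, x) centerline voxels.
    Each myocardial voxel gets the label of the artery of minimum path cost."""
    label = np.full(myo_mask.shape, -1, dtype=int)
    dist = np.full(myo_mask.shape, np.inf)
    heap, names = [], sorted(seeds)
    for k, name in enumerate(names):
        for v in seeds[name]:
            dist[v], label[v] = 0.0, k
            heapq.heappush(heap, (0.0, v, k))
    steps = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
             for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while heap:
        d, (z, y, x), k = heapq.heappop(heap)
        if d > dist[z, y, x]:
            continue                        # stale queue entry
        for dz, dy, dx in steps:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < myo_mask.shape[i] for i in range(3)) and myo_mask[n]:
                nd = d + (dz * dz + dy * dy + dx * dx) ** 0.5
                if nd < dist[n]:
                    dist[n], label[n] = nd, k
                    heapq.heappush(heap, (nd, n, k))
    return label, names
```

Summing voxel volumes per label then yields the per-territory myocardial mass that the study compares against the reference injections.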
NASA Astrophysics Data System (ADS)
Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.
2015-12-01
The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and reprocessing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 come mainly from information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced both by improvements in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and by limitations of the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is that it also apply to these discrete-band spectrometers, which date back to 1978. To achieve continuity across all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high-accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite its design around a limited set of wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content, for example SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and mention potential improvements that aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.
A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem
NASA Astrophysics Data System (ADS)
Jäger, Gerold; Zhang, Weixiong
The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.
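The AP relaxation that the algorithm exploits can be sketched as follows, using SciPy's Hungarian solver as a stand-in AP algorithm; the SAT-based phase that merges subcycles is not shown:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ap_relaxation_cycle(adj):
    """Assignment-problem relaxation of the directed Hamiltonian cycle problem.
    Every Hamiltonian cycle is a vertex-disjoint cycle cover, i.e. a feasible
    AP solution, so solving the AP on the adjacency structure either yields a
    single cycle (a Hamiltonian cycle) or a cover of subcycles that a further
    search / SAT phase would have to merge."""
    n = adj.shape[0]
    cost = np.where(adj, 0.0, 1e9)           # forbid non-edges with a big cost
    np.fill_diagonal(cost, 1e9)              # no self-loops
    rows, succ = linear_sum_assignment(cost)
    if cost[rows, succ].sum() >= 1e9:
        return None                          # no cycle cover exists at all
    seen, v = set(), 0                       # walk the successor permutation
    while v not in seen:
        seen.add(v)
        v = succ[v]
    return succ if len(seen) == n else None  # Hamiltonian iff one cycle covers all
```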
A learning approach to the bandwidth multicolouring problem
NASA Astrophysics Data System (ADS)
Akbari Torkestani, Javad
2016-05-01
In this article, a generalisation of the vertex colouring problem known as the bandwidth multicolouring problem (BMCP) is considered, in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and to its neighbours is no less than a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability close enough to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ) optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can be made by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms and the results show the efficiency of the proposed algorithm in terms of colour set size and running time.
Braithwaite, Susan S.; Godara, Hemant; Song, Julie; Cairns, Bruce A.; Jones, Samuel W.; Umpierrez, Guillermo E.
2009-01-01
Background Algorithms for intravenous insulin infusion may assign the infusion rate (IR) by a two-step process. First, the previous insulin infusion rate (IRprevious) and the rate of change of blood glucose (BG) from the previous iteration of the algorithm are used to estimate the maintenance rate (MR) of insulin infusion. Second, the insulin IR for the next iteration (IRnext) is assigned to be commensurate with the MR and the distance of the current blood glucose (BGcurrent) from target. With use of a specific set of algorithm parameter values, a family of iso-MR curves is created, each giving IR as a function of MR and BG. Method To test the feasibility of estimating MR from the IRprevious and the previous rate of change of BG, historical hyperglycemic data points were used to compute the “maintenance rate cross step next estimate” (MRcsne). Historical cases had been treated with intravenous insulin infusion using a tabular protocol that estimated MR according to column-change rules. The mean IR on historical stable intervals (MRtrue), an estimate of the biologic value of MR, was compared to MRcsne during the hyperglycemic iteration immediately preceding the stable interval. Hypothetically calculated MRcsne-dependent IRnext was compared to IRnext assigned historically. An expanded theory of an algorithm is developed mathematically. Practical recommendations for computerization are proposed. Results The MRtrue determined on each of 30 stable intervals and the MRcsne during the immediately preceding hyperglycemic iteration differed, having medians with interquartile ranges 2.7 (1.2–3.7) and 3.2 (1.5–4.6) units/h, respectively. However, these estimates of MR were strongly correlated (R2 = 0.88). During hyperglycemia at 941 time points the IRnext assigned historically and the hypothetically calculated MRcsne-dependent IRnext differed, having medians with interquartile ranges 4.0 (3.0–6.0) and 4.6 (3.0–6.8) units/h, respectively, but these paired values again were correlated (R2 = 0.87). This article describes a programmable algorithm for intravenous insulin infusion. The fundamental equation of the algorithm gives the relationship among IR; the biologic parameter MR; and two variables expressing an instantaneous rate of change of BG, one of which must be zero at any given point in time and the other positive, negative, or zero, namely the rate of change of BG from below target (rate of ascent) and the rate of change of BG from above target (rate of descent). In addition to user-definable parameters, three special algorithm parameters discoverable in nature are described: the maximum rate of the spontaneous ascent of blood glucose during nonhypoglycemia, the glucose per daily dose of insulin exogenously mediated, and the MR at given patient time points. User-assignable parameters will facilitate adaptation to different patient populations. Conclusions An algorithm is described that estimates MR prior to the attainment of euglycemia and computes MR-dependent values for IRnext. Design features address glycemic variability, promote safety with respect to hypoglycemia, and define a method for specifying glycemic targets that are allowed to differ according to patient condition. PMID:20144334
DOT National Transportation Integrated Search
2011-01-01
Travel demand modeling plays a key role in the transportation system planning and evaluation process. The four-step sequential travel demand model is the most widely used technique in practice. Traffic assignment is the key step in the conventional f...
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
NASA Astrophysics Data System (ADS)
Anastasopoulos, Dimitrios; Moretti, Patrizia; Geernaert, Thomas; De Pauw, Ben; Nawrot, Urszula; De Roeck, Guido; Berghmans, Francis; Reynders, Edwin
2017-03-01
The presence of damage in a civil structure alters its stiffness and consequently its modal characteristics. The identification of these changes can provide engineers with useful information about the condition of a structure and constitutes the basic principle of vibration-based structural health monitoring. While eigenfrequencies and mode shapes are the most commonly monitored modal characteristics, their sensitivity to structural damage may be low relative to their sensitivity to environmental influences. Modal strains or curvatures could offer an attractive alternative, but current measurement techniques encounter difficulties in capturing, with sufficient accuracy, the very small strain (sub-microstrain) levels occurring during ambient or operational excitation. This paper investigates the ability to obtain sub-microstrain accuracy with standard fiber-optic Bragg gratings using a novel optical signal processing algorithm that identifies the wavelength shift with high accuracy and precision. The novel technique is validated in an extensive experimental modal analysis test on a steel I-beam instrumented with FBG sensors at its top and bottom flange. The raw wavelength FBG data are processed into strain values using both a novel correlation-based processing technique and a conventional peak-tracking technique. Subsequently, the strain time series are used to identify the beam's modal characteristics. Finally, the accuracy of both algorithms in identifying modal characteristics is extensively investigated.
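One plausible reading of a correlation-based wavelength-shift estimator is to cross-correlate the measured reflection spectrum against a reference spectrum and refine the peak by parabolic interpolation; the authors' exact estimator may differ:

```python
import numpy as np

def fbg_shift(wl, ref, meas):
    """Estimate an FBG wavelength shift from two reflection spectra sampled on
    the same uniform wavelength grid `wl`. Cross-correlate, locate the peak,
    and refine it with a 3-point parabolic fit for sub-sample resolution."""
    ref0 = ref - ref.mean()
    meas0 = meas - meas.mean()
    corr = np.correlate(meas0, ref0, mode="full")
    k = int(np.argmax(corr))
    if 0 < k < len(corr) - 1:                # parabolic refinement of the peak
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k += 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag = k - (len(ref) - 1)                 # shift in samples
    return lag * (wl[1] - wl[0])             # shift in wavelength units
```

Because the estimate pools all samples of the grating spectrum instead of tracking a single peak sample, it is far less sensitive to intensity noise, which is the motivation for correlation-based processing at sub-microstrain levels.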
10 Years of Asian Dust Storm Observations from SeaWiFS: Source, Pathway, and Interannual Variability
NASA Technical Reports Server (NTRS)
Hsu, N. Christina; Tsay, S.-C.; King, M.D.; Jeong, M.-J.
2008-01-01
In this paper, we will demonstrate the capability of a new satellite algorithm to retrieve aerosol optical thickness and single scattering albedo over bright-reflecting surfaces such as urban areas and deserts. Such retrievals have been difficult to perform using previously available algorithms that use wavelengths from the mid-visible to the near IR, because those algorithms have trouble separating the aerosol signal from the contribution of the bright surface reflectance. The new algorithm, called Deep Blue, utilizes blue-wavelength measurements from instruments such as SeaWiFS and MODIS to infer the properties of aerosols, since the surface reflectance over land in the blue part of the spectrum is much lower than at longer wavelengths. We have validated the satellite-retrieved aerosol optical thickness with data from AERONET sunphotometers over desert and semi-desert regions. The comparisons show reasonable agreement between the two. These new satellite products will allow scientists to determine quantitatively the aerosol properties near sources using high-spatial-resolution measurements from SeaWiFS and MODIS-like instruments. The multiyear satellite measurements (1998-2007) from SeaWiFS will be utilized to investigate the interannual variability of the source, pathway, and dust loading associated with dust outbreaks in East Asia. The monthly averaged aerosol optical thickness during springtime from SeaWiFS will also be compared with the MODIS Deep Blue products.
NASA Astrophysics Data System (ADS)
Shang, Xiang; Xia, Haiyun; Dou, Xiankang; Shangguan, Mingjia; Li, Manyi; Wang, Chong
2018-07-01
An eye-safe 1.5 μm visibility lidar that takes the in situ particle size distribution into account is presented in this work; its eye safety allows deployment in crowded places like airports. In such a case, the measured extinction coefficient at 1.5 μm must be converted to that at 0.55 μm for visibility retrieval. Although several models have been established since 1962, accurate wavelength conversion remains a challenge. An adaptive inversion algorithm for the 1.5 μm visibility lidar is proposed and demonstrated using the in situ Angstrom wavelength exponent, which is derived from an aerosol spectrometer. The impact of the particle size distribution of atmospheric aerosols and the Rayleigh backscattering of atmospheric molecules are taken into account. Using the 1.5 μm visibility lidar, the visibility with a temporal resolution of 5 min is detected over 48 h in Hefei (31.83° N, 117.25° E). The average visibility error between the new method and a visibility sensor (Vaisala, PWD52) is 5.2% with an R-square value of 0.96, while the relative error between another reference visibility lidar at 532 nm and the visibility sensor is 6.7% with an R-square value of 0.91. All results agree with each other well, demonstrating the accuracy and stability of the algorithm.
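The wavelength conversion at the heart of this retrieval follows the Angstrom power law combined with the Koschmieder relation. A minimal sketch, covering the aerosol term only (the molecular Rayleigh contribution that the paper also accounts for is omitted):

```python
def visibility_from_1550(sigma_1550_per_km, angstrom):
    """Convert a 1.5 um aerosol extinction coefficient (1/km) to 0.55 um via
    the Angstrom law sigma(lambda) ~ lambda**(-angstrom), then apply the
    Koschmieder relation V = 3.912 / sigma(0.55 um). The Angstrom exponent
    is taken from in situ aerosol spectrometer data, as in the abstract."""
    sigma_550 = sigma_1550_per_km * (1.5 / 0.55) ** angstrom
    return 3.912 / sigma_550      # visibility in km
```

The sensitivity of the result to the exponent is exactly why the paper replaces a fixed climatological value with an adaptively measured one: at a typical exponent of 1, the conversion factor is about 2.7, but it roughly doubles for each unit increase in the exponent.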
Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review
Zuo, Chao; Huang, Lei; Zhang, Minliang; ...
2016-05-06
In fringe projection profilometry (FPP), temporal phase unwrapping is an essential procedure for recovering an unambiguous absolute phase even in the presence of large discontinuities or spatially isolated surfaces. So far, three groups of temporal phase unwrapping algorithms have typically been proposed in the literature: the multi-frequency (hierarchical) approach, the multi-wavelength (heterodyne) approach, and the number-theoretical approach. In this paper, the three methods are investigated and compared in detail by analytical, numerical, and experimental means. The basic principles and recent developments of the three kinds of algorithms are first reviewed. Then, the reliability of the different phase unwrapping algorithms is compared based on a rigorous stochastic noise model. Moreover, this noise model is used to predict the optimum fringe period for each unwrapping approach, which is a key factor governing the phase measurement accuracy in FPP. Simulations and experimental results verified the correctness and validity of the proposed noise model as well as the prediction scheme. The results show that multi-frequency temporal phase unwrapping provides the best unwrapping reliability, while the multi-wavelength approach is the most susceptible to noise-induced unwrapping errors.
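One rung of the multi-frequency (hierarchical) approach admits a very short implementation: scale the already-absolute coarse phase by the frequency ratio and pick the 2π fringe order that best explains the wrapped fine phase. A sketch under that standard formulation:

```python
import numpy as np

def unwrap_hierarchical(phi_coarse_abs, phi_fine_wrapped, ratio):
    """One rung of multi-frequency temporal phase unwrapping.
    phi_coarse_abs: absolute (unwrapped) phase at the lower frequency.
    phi_fine_wrapped: wrapped phase at the higher frequency, in (-pi, pi].
    ratio: fine-to-coarse frequency ratio.
    Returns the absolute phase at the higher frequency."""
    order = np.round((ratio * phi_coarse_abs - phi_fine_wrapped) / (2 * np.pi))
    return phi_fine_wrapped + 2 * np.pi * order
```

Starting from a unit-frequency phase (which is unambiguous by construction) and applying this step up the frequency ladder yields the absolute high-frequency phase; the rounding step is also where fringe noise causes the unwrapping errors that the paper's stochastic model quantifies.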
Algorithm Analysis of the DSM-5 Alcohol Withdrawal Symptom.
Martin, Christopher S; Vergés, Alvaro; Langenbucher, James W; Littlefield, Andrew; Chung, Tammy; Clark, Duncan B; Sher, Kenneth J
2018-06-01
Alcohol withdrawal (AW) is an important clinical and diagnostic feature of alcohol dependence. AW has been found to predict a worsened course of illness in clinical samples, but in some community studies, AW endorsement rates are strikingly high, suggesting false-positive symptom assignments. Little research has examined the validity of the DSM-5 algorithm for AW, which requires either the presence of at least 2 of 8 subcriteria (i.e., autonomic hyperactivity, tremulousness, insomnia, nausea, hallucinations, psychomotor agitation, anxiety, and grand mal seizures), or, the use of alcohol to avoid or relieve these symptoms. We used item and algorithm analyses of data from waves 1 and 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (current drinkers, n = 26,946 at wave 1) to study the validity of DSM-5 AW as operationalized by the Alcohol Use Disorder and Associated Disabilities Interview Schedule-DSM-IV (AUDADIS-IV). A substantial proportion of individuals given the AW symptom reported only modest to moderate levels of alcohol use and alcohol problems. Alternative AW algorithms were superior to DSM-5 in terms of levels of alcohol use and alcohol problem severity among those with AW, group difference effect sizes, and predictive validity at a 3-year follow-up. The superior alternative algorithms included those that excluded the nausea subcriterion; required withdrawal-related distress or impairment; increased the AW subcriteria threshold from 2 to 3 items; and required tremulousness for AW symptom assignment. The results indicate that the DSM-5 definition of AW, as assessed by the AUDADIS-IV, has low specificity. This shortcoming can be addressed by making the algorithm for symptom assignment more stringent. Copyright © 2018 by the Research Society on Alcoholism.
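The rules under comparison are simple enough to state as code. The "stricter" variant below is an illustrative composite of the alternatives listed in the abstract, not the paper's single preferred algorithm:

```python
def dsm5_aw(sub, relief):
    """DSM-5 rule as described: assign AW if at least 2 of the 8 subcriteria
    are present, or if alcohol is used to avoid or relieve withdrawal.
    `sub` maps subcriterion name (e.g. 'tremulousness', 'nausea') -> bool."""
    return sum(sub.values()) >= 2 or relief

def stricter_aw(sub, relief, distress):
    """Illustrative composite of the superior alternatives discussed: drop the
    nausea subcriterion, raise the threshold to 3, and require
    withdrawal-related distress or impairment."""
    core = {k: v for k, v in sub.items() if k != "nausea"}
    return (sum(core.values()) >= 3 or relief) and distress
```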
Path Planning Algorithms for the Adaptive Sensor Fleet
NASA Technical Reports Server (NTRS)
Stoneking, Eric; Hosler, Jeff
2005-01-01
The Adaptive Sensor Fleet (ASF) is a general-purpose fleet management and planning system being developed by NASA in coordination with NOAA. The current mission of ASF is to provide the capability for autonomous cooperative survey and sampling of dynamic oceanographic phenomena such as current systems and algae blooms. Each ASF vessel is a software model that represents a real-world platform carrying a variety of sensors. The OASIS platform will provide the first physical vessel, outfitted with the systems and payloads necessary to execute the oceanographic observations described in this paper. The ASF architecture is being designed for extensibility to accommodate heterogeneous fleet elements, and is not limited to using the OASIS platform to acquire data. This paper describes the path planning algorithms developed for the acquisition phase of a typical ASF task. Given a polygonal target region to be surveyed, the region is subdivided according to the number of vessels in the fleet. The subdivision algorithm seeks a solution in which all subregions have equal area and minimum mean radius. Once the subregions are defined, a dynamic programming method is used to find a minimum-time path for each vessel from its initial position to its assigned region. This path plan includes the effects of water currents as well as avoidance of known obstacles. A fleet-level planning algorithm then shuffles the individual vessel assignments to find the overall solution that puts all vessels in their assigned regions in the minimum time. This shuffle algorithm may be described as a process of elimination on the sorted list of permutations of a cost matrix. All these path planning algorithms are facilitated by discretizing the region of interest onto a hexagonal tiling.
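The fleet-level shuffle can be read as a search over assignment permutations that minimizes the latest arrival time. A brute-force sketch for small fleets (the paper's elimination over a sorted permutation list prunes this search; the min-max objective is an assumption consistent with "puts all vessels in their assigned regions in the minimum time"):

```python
from itertools import permutations

def assign_regions(cost):
    """cost[v][r]: vessel v's transit time to region r (square matrix).
    Return the vessel-to-region permutation that gets the whole fleet on
    station soonest, i.e. minimizes the latest single arrival."""
    n = len(cost)
    best_perm, best_t = None, float("inf")
    for perm in permutations(range(n)):
        t = max(cost[v][perm[v]] for v in range(n))   # fleet-ready time
        if t < best_t:
            best_perm, best_t = perm, t
    return best_perm, best_t
```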
Xia, Jiaqi; Peng, Zhenling; Qi, Dawei; Mu, Hongbo; Yang, Jianyi
2017-03-15
Protein fold classification is a critical step in protein structure prediction. There are two possible ways to classify protein folds. One is through template-based fold assignment and the other is ab-initio prediction using machine learning algorithms. A combination of both solutions to improve the prediction accuracy had not been explored before. We developed two algorithms, HH-fold and SVM-fold, for protein fold classification. HH-fold is a template-based fold assignment algorithm using the HHsearch program. SVM-fold is a support vector machine-based ab-initio classification algorithm, in which a comprehensive set of features are extracted from three complementary sequence profiles. These two algorithms are then combined, resulting in the ensemble approach TA-fold. We performed a comprehensive assessment of the proposed methods by comparing with ab-initio methods and template-based threading methods on six benchmark datasets. An accuracy of 0.799 was achieved by TA-fold on the DD dataset, which consists of proteins from 27 folds. This represents an improvement of 5.4-11.7% over ab-initio methods. After updating this dataset to include more proteins in the same folds, the accuracy increased to 0.971. In addition, TA-fold achieved >0.9 accuracy on a large dataset consisting of 6451 proteins from 184 folds. Experiments on the LE dataset show that TA-fold consistently outperforms other threading methods at the family, superfamily and fold levels. The success of TA-fold is attributed to the combination of template-based fold assignment and ab-initio classification using features from complementary sequence profiles that contain rich evolution information. http://yanglab.nankai.edu.cn/TA-fold/. yangjy@nankai.edu.cn or mhb-506@163.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
A genetic algorithm-based framework for wavelength selection on sample categorization.
Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F
2017-08-01
In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to insert samples into proper classes. In the next step, the selected intervals were refined through the genetic algorithm (GA) by identifying a limited number of wavelengths from the previously selected intervals, aimed at maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information towards monitoring illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checking events, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to insert irrelevant wavelengths into the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.
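The GA refinement step can be sketched as a search over binary wavelength masks scored by cross-validated KNN accuracy. The operators and parameters below (uniform crossover, fixed subset size, truncation selection, mutation rate) are illustrative choices, not the paper's settings:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def ga_select(X, y, n_keep=30, pop=40, gens=60, seed=0):
    """Evolve binary masks over the columns of X (candidate wavelengths),
    scoring each mask by 5-fold cross-validated KNN accuracy."""
    rng = np.random.default_rng(seed)
    n_wl = X.shape[1]

    def repair(mask):                       # force exactly n_keep selected columns
        on, off = np.flatnonzero(mask), np.flatnonzero(~mask)
        if len(on) > n_keep:
            mask[rng.choice(on, len(on) - n_keep, replace=False)] = False
        elif len(on) < n_keep:
            mask[rng.choice(off, n_keep - len(on), replace=False)] = True
        return mask

    def fitness(mask):
        knn = KNeighborsClassifier(n_neighbors=3)
        return cross_val_score(knn, X[:, mask], y, cv=5).mean()

    popn = [repair(rng.random(n_wl) < n_keep / n_wl) for _ in range(pop)]
    for _ in range(gens):
        scores = np.array([fitness(m) for m in popn])
        parents = [popn[i] for i in np.argsort(scores)[-pop // 2:]]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            cross = rng.random(n_wl) < 0.5            # uniform crossover
            child = np.where(cross, parents[a], parents[b])
            flip = rng.random(n_wl) < 0.01            # light mutation
            children.append(repair(child ^ flip))
        popn = parents + children
    best = max(popn, key=fitness)
    return np.flatnonzero(best)             # indices of the retained wavelengths
```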
High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems
NASA Astrophysics Data System (ADS)
Bose, S. K.
1980-12-01
Demand-assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic while retaining high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple-point communication system carrying fixed-length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are made following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision-type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand-assigned mode. Schemes B and C operate all their forward message channels in a demand assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolić, Nikola; Liu, Yina; Liyu, Andrey
Ultrahigh-resolution mass spectrometry, such as Fourier transform ion-cyclotron resonance mass spectrometry (FT-ICR MS), can resolve thousands of molecular ions in complex organic matrices. A Compound Identification Algorithm (CIA) was previously developed for automated elemental formula assignment for natural organic matter (NOM). In this work we describe a user-friendly interface for CIA, titled Formularity, which includes additional functionality to search formulas based on an Isotopic Pattern Algorithm (IPA). While CIA assigns elemental formulas for compounds containing C, H, O, N, S, and P, IPA is capable of assigning formulas for compounds containing other elements. We used halogenated organic compounds (HOC), a chemical class that is ubiquitous in nature as well as in anthropogenic systems, as an example to demonstrate the capability of Formularity with IPA. A HOC standard mix was used to evaluate the identification confidence of IPA. HOC spikes in NOM and tap water were used to assess HOC identification in natural and anthropogenic matrices. Strategies for reconciliation of CIA and IPA assignments are discussed. Software and sample databases with documentation are freely available from the PNNL OMICS software repository https://omics.pnl.gov/software/formularity.
Multi-wavelength differential absorption measurements of chemical species
NASA Astrophysics Data System (ADS)
Brown, David M.
The probability of accurate detection and quantification of airborne species is enhanced when several optical wavelengths are used to measure the differential absorption of molecular spectral features. Characterization of minor atmospheric constituents, biological hazards, and chemical plumes containing multiple species is difficult with current approaches because of weak signatures and the limited number of wavelengths used for identification. Current broadband systems such as Differential Optical Absorption Spectroscopy (DOAS) either have limitations for long-range propagation or require transmitter power levels that are unsafe for operation in urban environments. Passive hyperspectral imaging systems that utilize absorption of solar scatter at visible and infrared wavelengths, or use absorption of background thermal emission, have been employed routinely for detection of airborne chemical species. Passive approaches have operational limitations at various ranges, or under adverse atmospheric conditions, because the source intensity and spectrum are often unknown. The work presented here describes a measurement approach that uses a known source of low transmitted power for an active system, while retaining the benefits of broadband and extremely long-path absorption operation. An optimized passive imaging system is also described that operates in the 3 to 4 μm window of the mid-infrared. Such active and passive instruments can be configured to optimize the detection of several hydrocarbon gases, as well as many other species of interest. Measurements have provided the incentive to develop algorithms for the calculation of atmospheric species concentrations using multiple wavelengths. These algorithms are used to prepare simulations and make comparisons with experimental results from absorption data of a supercontinuum laser source. The MODTRAN model is used in preparing the simulations, and also in developing additional algorithms to select filters for use with a MWIR (midwave infrared) imager for detection of plumes of methane, propane, gasoline vapor, and diesel vapor. These simulations were prepared for system designs operating on a down-looking airborne platform. A data analysis algorithm for use with a hydrocarbon imaging system extracts regions of interest from the field-of-view for further analysis. An error analysis is presented for a scanning DAS (Differential Absorption Spectroscopy) lidar system operating from an airborne platform that uses signals scattered from topographical targets. The analysis is built into a simulation program for testing real-time data processing approaches, and to gauge the effects of ground reflectivity variations on measurements of path column concentration. An example simulation provides a description of the data expected for methane. Several accomplishments of this research include: (1) A new lidar technique for detection and measurement of concentrations of atmospheric species is demonstrated that uses a low-power supercontinuum source. (2) A new multi-wavelength algorithm, which demonstrates excellent performance, is applied to processing spectroscopic data collected by a long-path supercontinuum laser absorption instrument. (3) A simulation program for topographical scattering of a scanning DAS system is developed, and it is validated with aircraft data from the ITT Industries ANGEL (Airborne Natural Gas Emission Lidar) 3-lambda lidar system.
(4) An error analysis procedure for DAS is developed, and is applied to measurements and simulations for an airborne platform. (5) A method for filter selection is developed and tested for use with an infrared imager that optimizes the detection for various hydrocarbons that absorb in the midwave infrared. (6) The development of a Fourier analysis algorithm is described that allows a user to rapidly separate hydrocarbon plumes from the background features in the field of view of an imaging system.
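To make the retrieval concrete, the following is a minimal sketch (not the work's own code) of the standard two-wavelength topographic-target DAS relation that the multi-wavelength algorithms above generalize; the function name and the cross-section and power values are illustrative assumptions.

    import numpy as np

    def das_column_density(p_on, p_off, sigma_on, sigma_off):
        # Path-integrated column density from a single topographic-target
        # return, via the standard DAS/DIAL relation
        #   N = ln(P_off / P_on) / (2 * (sigma_on - sigma_off)),
        # where the factor of 2 accounts for the round trip to the target.
        return np.log(p_off / p_on) / (2.0 * (sigma_on - sigma_off))

    # Illustrative numbers only (not from this work): a weak methane feature.
    sigma_on, sigma_off = 1.2e-20, 0.1e-20  # absorption cross sections, cm^2
    p_on, p_off = 0.82, 1.00                # received powers, normalized
    print(das_column_density(p_on, p_off, sigma_on, sigma_off))  # ~9.0e18 cm^-2

A multi-wavelength system replaces this two-point ratio with a fit across many on/off pairs, which is what makes weak, overlapping signatures separable.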
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive algorithms for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP, replacing the mutation operator with a tabu search procedure. Our experiments using benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them.
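As a rough illustration of the hybrid described above, here is a compact sketch, under stated assumptions, of the two ingredients: a BBO-style migration step that keeps permutations valid, and a short tabu search on pairwise swaps standing in for the mutation operator. All names and parameter values (tenure, iteration counts) are illustrative, not taken from the paper.

    import random

    def qap_cost(perm, F, D):
        # Quadratic assignment objective: sum of flow * distance.
        n = len(perm)
        return sum(F[i][j] * D[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))

    def migrate(target, donor):
        # BBO-style migration for permutations: copy one facility
        # placement from a fitter habitat, then repair with a swap so
        # the result is still a valid permutation.
        child = list(target)
        i = random.randrange(len(child))
        j = child.index(donor[i])
        child[i], child[j] = child[j], child[i]
        return child

    def tabu_search(perm, F, D, iters=50, tenure=7):
        # Short tabu search on pairwise swaps, replacing BBO's mutation.
        best, best_c = list(perm), qap_cost(perm, F, D)
        cur, tabu = list(perm), {}
        for t in range(iters):
            move, move_c = None, float("inf")
            for i in range(len(cur)):
                for j in range(i + 1, len(cur)):
                    cur[i], cur[j] = cur[j], cur[i]
                    c = qap_cost(cur, F, D)
                    cur[i], cur[j] = cur[j], cur[i]
                    # Aspiration: a tabu move is allowed if it beats the best.
                    if (tabu.get((i, j), -1) < t or c < best_c) and c < move_c:
                        move, move_c = (i, j), c
            if move is None:
                break
            i, j = move
            cur[i], cur[j] = cur[j], cur[i]
            tabu[(i, j)] = t + tenure
            if move_c < best_c:
                best, best_c = list(cur), move_c
        return best, best_c

In the hybrid, each generation would apply migration across the population and then refine each habitat with the tabu step instead of mutating it.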
Studies on pulsed optogalvanic effect in Eu/Ne hollow cathode discharge.
Saini, V K; Kumar, P; Dixit, S K; Nakhe, S V
2014-07-01
The optogalvanic (OG) effect has been observed in a Eu/Ne hollow cathode discharge lamp using pulsed laser irradiation. An OG spectrum is recorded in the dye laser wavelength region 574–602 nm using a boxcar averager. In total, 41 atomic lines are observed. Of these, 38 lines are assigned to neon transitions. Two lines, at wavelengths 576.519 and 601.815 nm, are assigned to the europium transitions (4f⁷6s² ⁸S₇/₂ → 4f⁷6s6p z⁶P₇/₂) and (4f⁷6s² ⁸S₇/₂ → 4f⁷6s6p z⁸P₉/₂), respectively, and the remaining line at 582.475 nm could not be assigned. The effect of the discharge current on the europium as well as the neon OG signals is also studied. At moderate discharge currents, an extra positive peak is observed in the neon OG signal for the transition (1s₅ → 2p₂) at 588.189 nm, which is explained by a Penning-ionization process involving quasi-resonant energy transfer between excited neon and europium atoms in the 2p₂ and ¹⁰D₉/₂ states, respectively.
NASA's Experience with UV Remote Sensing Using SBUV and TOMS Instruments
NASA Technical Reports Server (NTRS)
Bhartia, P. K.
1999-01-01
This paper will discuss key features of the NASA algorithm that has been used to produce several highly popular geophysical products from the Solar Backscatter Ultraviolet (SBUV) and Total Ozone Mapping Spectrometer (TOMS) series of instruments. Since these instruments have a limited number of wavelengths, many innovative algorithmic approaches have been developed over the years to derive maximum information from these sensors. We will use Global Ozone Monitoring Experiment (GOME) data to test the assumptions made in these algorithms and show what additional information is contained in the GOME hyperspectral data. At NASA we are using this information to improve the SBUV and TOMS algorithms, as well as to develop more efficient algorithms to process GOME data.
Evaluation of Sun Glint Correction Algorithms for High-Spatial Resolution Hyperspectral Imagery
2012-09-01
The imagery was acquired with a sensor bracket mount combining the Airborne Imaging Spectrometer for Applications (AISA) Eagle and Hawk sensors into a single imaging system (SpecTIR 2011). The AISA Eagle is a VNIR sensor with a wavelength range of approximately 400–970 nm, and the AISA Hawk is a SWIR sensor.
Impact of beacon wavelength on phase-compensation performance
NASA Astrophysics Data System (ADS)
Enterline, Allison A.; Spencer, Mark F.; Burrell, Derek J.; Brennan, Terry J.
2017-09-01
This study evaluates the effects of beacon-wavelength mismatch on phase-compensation performance. In general, beacon-wavelength mismatch occurs at the system level because the beacon-illuminator laser (BIL) and high-energy laser (HEL) are often at different wavelengths. Such is the case, for example, when using an aperture sharing element to isolate the beam-control sensor suite from the blinding nature of the HEL. With that said, this study uses the WavePlex Toolbox in MATLAB® to model ideal spherical wave propagation through various atmospheric-turbulence conditions. To quantify phase-compensation performance, we also model a nominal adaptive-optics (AO) system. We achieve correction from a Shack-Hartmann wavefront sensor and continuous-face-sheet deformable mirror using a least-squares phase reconstruction algorithm in the Fried geometry and a leaky integrator control law. To this end, we plot the power in the bucket metric as a function of BIL-HEL wavelength difference. Our initial results show that positive BIL-HEL wavelength differences achieve better phase compensation performance compared to negative BIL-HEL wavelength differences (i.e., red BILs outperform blue BILs). This outcome is consistent with past results.
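For readers unfamiliar with the control law mentioned above, a minimal sketch of a leaky integrator follows; the gain and leak values and the perfect-reconstructor toy loop are illustrative assumptions, not the WavePlex model.

    import numpy as np

    def leaky_integrator_step(cmd, residual, gain=0.3, leak=0.99):
        # Deformable-mirror command update: the old command decays
        # slightly each frame (leak < 1) and integrates the
        # reconstructed residual phase scaled by the loop gain.
        return leak * cmd + gain * residual

    # Toy closed loop against a static aberration (illustrative only).
    true_phase = np.array([0.5, -0.2, 0.8])   # waves, per actuator
    cmd = np.zeros(3)
    for _ in range(50):
        residual = true_phase - cmd           # ideal reconstructor assumed
        cmd = leaky_integrator_step(cmd, residual)
    # The leak leaves a small steady-state residual; that is the price
    # paid for robustness against unsensed or poorly sensed modes.
    print(np.round(true_phase - cmd, 3))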
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…
The practical evaluation of DNA barcode efficacy.
Spouge, John L; Mariño-Ramírez, Leonardo
2012-01-01
This chapter describes a workflow for measuring the efficacy of a barcode in identifying species. First, assemble individual sequence databases corresponding to each barcode marker. A controlled collection of taxonomic data is preferable to GenBank data, because GenBank data can be problematic, particularly when comparing barcodes based on more than one marker. To ensure proper controls when evaluating species identification, specimens not having a sequence in every marker database should be discarded. Second, select a computer algorithm for assigning species to barcode sequences. No algorithm has yet improved notably on assigning a specimen to the species of its nearest neighbor within a barcode database. Because global sequence alignments (e.g., with the Needleman-Wunsch algorithm, or some related algorithm) examine entire barcode sequences, they generally produce better species assignments than local sequence alignments (e.g., with BLAST). No neighboring method (e.g., global sequence similarity, global sequence distance, or evolutionary distance based on a global alignment) has yet shown a notable superiority in identifying species. Finally, "the probability of correct identification" (PCI) provides an appropriate measurement of barcode efficacy. The overall PCI for a data set is the average of the species PCIs, taken over all species in the data set. This chapter states explicitly how to calculate PCI, how to estimate its statistical sampling error, and how to use data on PCR failure to set limits on how much improvements in PCR technology can improve species identification.
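A minimal sketch of the PCI bookkeeping described above, assuming a precomputed pairwise barcode distance matrix and nearest-neighbor species assignment; the names are illustrative and ties at the nearest neighbor are not handled.

    import numpy as np

    def pci(dist, labels):
        # Per-species probability of correct identification: a specimen
        # counts as correct if its nearest neighbor (self excluded) in
        # the barcode distance matrix belongs to the same species.
        d = dist.astype(float).copy()
        np.fill_diagonal(d, np.inf)              # exclude self-matches
        nn = d.argmin(axis=1)
        correct = labels[nn] == labels
        per_species = {s: float(correct[labels == s].mean())
                       for s in np.unique(labels)}
        # Overall PCI: unweighted mean of the species PCIs.
        return per_species, float(np.mean(list(per_species.values())))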
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Chughtai, Mohsan Niaz
2018-05-01
In this paper, IRIS (Integrated Router Interconnected Spectrally), an optical-domain architecture for datacenter networks, is analyzed. IRIS is integrated with advanced modulation formats (M-QAM) and a coherent optical receiver, and the channel impairments are compensated using DSP algorithms following the coherent receiver. The proposed scheme allows N² multiplexed wavelengths for an N×N switch size. The performance of the N×N IRIS switch with and without wavelength conversion is analyzed for different baud rates over M-QAM modulation formats, in terms of bit error rate (BER) versus OSNR curves.
Amako, Jun; Shinozaki, Yu
2016-07-11
We report on a dual-wavelength diffractive beam splitter designed for use in parallel laser processing. This novel optical element generates two beam arrays of different wavelengths and allows their overlap at the process points on a workpiece. To design the deep surface-relief profile of a splitter using a simulated annealing algorithm, we introduce a heuristic but practical scheme to determine the maximum depth and the number of quantization levels. The designed corrugations were fabricated in a photoresist by maskless grayscale exposure using a high-resolution spatial light modulator. We characterized the photoresist splitter, thereby validating the proposed beam-splitting concept.
NASA Astrophysics Data System (ADS)
Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai
2016-05-01
The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using the relative k-space distribution obtained with a low-coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into the wavelength domain. The calibration performance of the proposed method was demonstrated under two experimental conditions, with four and with eight characteristic spectral peaks. The proposed method produced reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to stronger suppression of sidelobes in the point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
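The zero-crossing step lends itself to a compact sketch. The code below, under illustrative assumptions (an AC-coupled interferogram, linear sub-pixel interpolation), extracts a relative k axis only; the absolute anchoring against calibration-lamp lines described in the abstract is omitted.

    import numpy as np

    def relative_k_axis(interferogram):
        # Zero crossings of a low-coherence fringe signal are equally
        # spaced in wavenumber, so indexing the crossings uniformly and
        # interpolating gives k(pixel) up to an offset and scale.
        sig = interferogram - np.mean(interferogram)
        idx = np.where(np.diff(np.signbit(sig)))[0]    # crossing intervals
        frac = sig[idx] / (sig[idx] - sig[idx + 1])    # sub-pixel position
        crossings = idx + frac
        k_rel = np.arange(len(crossings))              # uniform in k
        pixels = np.arange(len(sig))
        return np.interp(pixels, crossings, k_rel)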
An Experiment of GMPLS-Based Dispersion Compensation Control over In-Field Fibers
NASA Astrophysics Data System (ADS)
Seno, Shoichiro; Horiuchi, Eiichi; Yoshida, Sota; Sugihara, Takashi; Onohara, Kiyoshi; Kamei, Misato; Baba, Yoshimasa; Kubo, Kazuo; Mizuochi, Takashi
As ROADMs (Reconfigurable Optical Add/Drop Multiplexers) are becoming widely used in metro/core networks, distributed control of wavelength paths by extended GMPLS (Generalized MultiProtocol Label Switching) protocols has attracted much attention. For the automatic establishment of an arbitrary wavelength path satisfying dynamic traffic demands over a ROADM or WXC (Wavelength Cross Connect)-based network, precise determination of chromatic dispersion over the path and optimized assignment of dispersion compensation capabilities at related nodes are essential. This paper reports an experiment over in-field fibers where GMPLS-based control was applied for the automatic discovery of chromatic dispersion, path computation, and wavelength path establishment with dynamic adjustment of variable dispersion compensation. The GMPLS-based control scheme, which the authors called GMPLS-Plus, extended GMPLS's distributed control architecture with attributes for automatic discovery, advertisement, and signaling of chromatic dispersion. In this experiment, wavelength paths with distances of 24km and 360km were successfully established and error-free data transmission was verified. The experiment also confirmed path restoration with dynamic compensation adjustment upon fiber failure.
Multi-granularity Bandwidth Allocation for Large-Scale WDM/TDM PON
NASA Astrophysics Data System (ADS)
Gao, Ziyue; Gan, Chaoqin; Ni, Cuiping; Shi, Qiongling
2017-12-01
WDM (wavelength-division multiplexing)/TDM (time-division multiplexing) PON (passive optical network) is being viewed as a promising solution for delivering multiple services and applications, such as high-definition video, video conference and data traffic. Considering real-time transmission, QoS (quality of service) requirements and a differentiated services model, a multi-granularity dynamic bandwidth allocation (DBA) in both the wavelength and time domains for large-scale hybrid WDM/TDM PON is proposed in this paper. The proposed scheme achieves load balance by using bandwidth prediction. Based on the bandwidth prediction, the wavelength assignment can be realized fairly and effectively to satisfy the different demands of various classes. In particular, the allocation of residual bandwidth further augments the DBA and makes full use of bandwidth resources in the network. To further improve the network performance, two schemes named extending the cycle of one free wavelength (ECoFW) and large bandwidth shrinkage (LBS) are proposed, which can prevent transmission interruption when the user employs more than one wavelength. The simulation results show the effectiveness of the proposed scheme.
Simultaneous three wavelength imaging with a scanning laser ophthalmoscope.
Reinholz, F; Ashman, R A; Eikelboom, R H
1999-11-01
Various imaging properties of scanning laser ophthalmoscopes (SLO), such as contrast or depth discrimination, are superior to those of the traditional photographic fundus camera. However, most SLOs are monochromatic, whereas photographic systems produce colour images, which inherently contain information over a broad wavelength range. An SLO system has been modified to allow simultaneous three-channel imaging. Laser light sources in the visible and infrared spectrum were concurrently launched into the system. Using different wavelength triads, digital fundus images were acquired at high frame rates. Favourable wavelength combinations were established, and high contrast, true (red, green, blue) or false (red, green, infrared) colour images of the retina were recorded. The monochromatic frames which form the colour image exhibit improved distinctness of different retinal structures such as the nerve fibre layer, the blood vessels, and the choroid. A multi-channel SLO combines the advantageous imaging properties of a tunable, monochrome SLO with the benefits and convenience of colour ophthalmoscopy. The options to modify parameters such as wavelength, intensity, gain, beam profile, and aperture sizes independently for every channel assign a high degree of versatility to the system.
Competitive game theoretic optimal routing in optical networks
NASA Astrophysics Data System (ADS)
Yassine, Abdulsalam; Kabranov, Ognian; Makrakis, Dimitrios
2002-09-01
Optical transport service providers need control and optimization strategies for wavelength management, network provisioning, restoration and protection, allowing them to define and deploy new services and maintain competitiveness. In this paper, we investigate a game-theory-based model for wavelength and flow assignment in multi-wavelength optical networks, consisting of several backbone long-haul optical network transport service providers (TSPs) who offer their services, in terms of bandwidth, to Internet service providers (ISPs). The ISPs act as brokers or agents between the TSP and the end user. The agent (ISP) buys services (bandwidth) from the TSP. The TSPs compete among themselves to sell their services and maintain profitability. We present a case study demonstrating the impact of different bandwidth broker demands on the supplier's profit and the price paid by the network broker.
Apaydin, Mehmet Serkan; Çatay, Bülent; Patrick, Nicholas; Donald, Bruce R
2011-05-01
Nuclear magnetic resonance (NMR) spectroscopy is an important experimental technique that allows one to study protein structure and dynamics in solution. An important bottleneck in NMR protein structure determination is the assignment of NMR peaks to the corresponding nuclei. Structure-based assignment (SBA) aims to solve this problem with the help of a template protein which is homologous to the target, and has applications in the study of structure-activity relationships and protein-protein and protein-ligand interactions. We formulate SBA as a linear assignment problem with additional nuclear Overhauser effect constraints, which can be solved within nuclear vector replacement's (NVR) framework (Langmead, C., Yan, A., Lilien, R., Wang, L. and Donald, B. (2003) A Polynomial-Time Nuclear Vector Replacement Algorithm for Automated NMR Resonance Assignments. Proc. the 7th Annual Int. Conf. Research in Computational Molecular Biology (RECOMB), Berlin, Germany, April 10-13, pp. 176-187. ACM Press, New York, NY; J. Comp. Bio. (2004), 11, pp. 277-298; Langmead, C. and Donald, B. (2004) An expectation/maximization nuclear vector replacement algorithm for automated NMR resonance assignments. J. Biomol. NMR, 29, 111-138). Our approach uses NVR's scoring function and data types and also gives the option of using CH and NH residual dipolar couplings (RDCs), instead of the NH RDCs which NVR requires. We test our technique on NVR's data set as well as on four new proteins. Our results are comparable to NVR's assignment accuracy on NVR's test set, but higher on novel proteins. Our approach allows partial assignments. It is also complete and can return the optimum as well as near-optimum assignments. Furthermore, it allows us to analyze the information content of each data type and is easily extendable to accept new forms of input data, such as additional RDCs.
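Stripped of the NOE constraints, the core combinatorial step above is an ordinary linear assignment, which standard solvers handle directly; the cost matrix below is an illustrative stand-in for NVR's scoring function, not the paper's data.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_peaks(cost):
        # cost[i, j]: incompatibility of NMR peak i with residue j
        # (chemical-shift and RDC terms in the real scoring function).
        # The Hungarian algorithm returns the minimum-cost bijection.
        rows, cols = linear_sum_assignment(cost)
        return list(zip(rows.tolist(), cols.tolist()))

    cost = np.array([[0.1, 0.9, 0.8],
                     [0.7, 0.2, 0.9],
                     [0.8, 0.6, 0.3]])
    print(assign_peaks(cost))   # [(0, 0), (1, 1), (2, 2)]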
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about a 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
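The abstract does not spell out the two algorithms, but the objective they optimize is easy to state: choose the codeword-to-binary-index map that minimizes the expected distortion added by channel bit errors. A greedy pairwise-swap sketch under a single-bit-error channel model follows; the names and the swap heuristic are illustrative, not the paper's algorithms.

    import numpy as np
    from itertools import combinations

    def channel_distortion(order, dist, p_bit, bits):
        # Expected extra distortion from single-bit index errors:
        # flipping bit b of codeword i's index delivers the codeword
        # that holds the flipped index.
        inv = np.argsort(order)                 # index -> codeword
        total = 0.0
        for i, idx in enumerate(order):
            for b in range(bits):
                j = inv[idx ^ (1 << b)]
                total += p_bit * dist[i][j]
        return total

    def improve_assignment(dist, p_bit=0.01, passes=5):
        # Greedy descent over pairwise index swaps; the rate is unchanged
        # because only the labeling of codewords is permuted.
        n = len(dist)
        bits = n.bit_length() - 1               # assumes n is a power of 2
        order = np.arange(n)
        cost = channel_distortion(order, dist, p_bit, bits)
        for _ in range(passes):
            for a, b in combinations(range(n), 2):
                order[a], order[b] = order[b], order[a]
                c = channel_distortion(order, dist, p_bit, bits)
                if c < cost:
                    cost = c
                else:
                    order[a], order[b] = order[b], order[a]
        return order, cost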
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Zhu, Ye; Wang, Chunhui; Yu, Xiaosong; Liu, Chuan; Liu, Binglin; Zhang, Jie
2017-07-01
With the capacity increasing in optical networks enabled by spatial division multiplexing (SDM) technology, spatial division multiplexing elastic optical networks (SDM-EONs) attract much attention from both academia and industry. The super-channel is an important type of service provisioning in SDM-EONs. This paper focuses on the issue of super-channel construction in SDM-EONs. A mixed super-channel oriented routing, spectrum and core assignment (MS-RSCA) algorithm is proposed for SDM-EONs considering inter-core crosstalk. Simulation results show that MS-RSCA can improve spectrum resource utilization and reduce blocking probability significantly compared with the baseline RSCA algorithms.
NASA Astrophysics Data System (ADS)
Eyono Obono, S. D.; Basak, Sujit Kumar
2011-12-01
The general formulation of the assignment problem consists in the optimal allocation of a given set of tasks to a workforce. This problem is covered by existing literature for different domains such as distributed databases, distributed systems, transportation, packets radio networks, IT outsourcing, and teaching allocation. This paper presents a new version of the assignment problem for the allocation of academic tasks to staff members in departments with long leave opportunities. It presents the description of a workload allocation scheme and its algorithm, for the allocation of an equitable number of tasks in academic departments where long leaves are necessary.
Investigation of correlation classification techniques
NASA Technical Reports Server (NTRS)
Haskell, R. E.
1975-01-01
A two-step classification algorithm for processing multispectral scanner data was developed and tested. The first step is a single pass clustering algorithm that assigns each pixel, based on its spectral signature, to a particular cluster. The output of that step is a cluster tape in which a single integer is associated with each pixel. The cluster tape is used as the input to the second step, where ground truth information is used to classify each cluster using an iterative method of potentials. Once the clusters have been assigned to classes the cluster tape is read pixel-by-pixel and an output tape is produced in which each pixel is assigned to its proper class. In addition to the digital classification programs, a method of using correlation clustering to process multispectral scanner data in real time by means of an interactive color video display is also described.
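The report predates modern clustering libraries, but the first step it describes maps onto a simple pattern. The sketch below is one plausible reading (join the nearest running mean within a threshold, else seed a new cluster); the function name and threshold are illustrative assumptions.

    import numpy as np

    def single_pass_cluster(pixels, threshold):
        # One pass over pixel spectral signatures: join the nearest
        # existing cluster if within `threshold` of its running mean,
        # otherwise seed a new cluster. Output: one integer per pixel,
        # matching the "cluster tape" described above.
        centers, counts, labels = [], [], []
        for x in pixels:
            if centers:
                d = [float(np.linalg.norm(x - c)) for c in centers]
                k = int(np.argmin(d))
                if d[k] < threshold:
                    counts[k] += 1
                    centers[k] += (x - centers[k]) / counts[k]  # running mean
                    labels.append(k)
                    continue
            centers.append(np.asarray(x, dtype=float).copy())
            counts.append(1)
            labels.append(len(centers) - 1)
        return np.array(labels), centers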
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
Assigning categorical information to Japanese medical terms using MeSH and MEDLINE.
Onogi, Yuzo
2007-01-01
This paper reports on the assignment of MeSH (Medical Subject Headings) categories to Japanese terms in an English-Japanese dictionary using the titles and abstracts of articles indexed in MEDLINE. In a previous study, 30,000 of the 80,000 terms in the dictionary were mapped to MeSH terms by normalized comparison. It was reasoned that if the remaining dictionary terms appeared in MEDLINE-indexed articles, which are indexed using MeSH terms, then relevancies between the dictionary terms and MeSH terms could be calculated, and thus MeSH categories assigned. This study compares two approaches for calculating the weight matrix: one is the TF*IDF method, and the other uses the inner product of two weight matrices. About 20,000 additional dictionary terms were identified in MEDLINE-indexed articles published between 2000 and 2004. The precision and recall of these algorithms were evaluated separately for MeSH terms and non-MeSH terms. Unfortunately, the precision and recall of the algorithms were not good, but this method will help with manual assignment of MeSH categories to dictionary terms.
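For reference, the first of the two weighting approaches is the standard TF*IDF scheme; a minimal sketch follows, with docs standing in for tokenized MEDLINE titles and abstracts (an illustrative assumption about the preprocessing).

    import math
    from collections import Counter

    def tfidf(docs):
        # docs: list of token lists. Returns one {term: weight} dict per
        # document; weights like these fill the term-vs-category matrix
        # whose inner products give term-to-MeSH-category relevance.
        n = len(docs)
        df = Counter(t for doc in docs for t in set(doc))
        out = []
        for doc in docs:
            tf = Counter(doc)
            out.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
        return out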
A demand assignment control in international business satellite communications network
NASA Astrophysics Data System (ADS)
Nohara, Mitsuo; Takeuchi, Yoshio; Takahata, Fumio; Hirata, Yasuo
An experimental system is being developed for use in an international business satellite (IBS) communications network based on demand-assignment (DA) and TDMA techniques. This paper discusses its system design, in particular from the viewpoints of a network configuration, a DA control, and a satellite channel-assignment algorithm. A satellite channel configuration is also presented along with a tradeoff study on transmission rate, HPA output power, satellite resource efficiency, service quality, and so on.
NASA Astrophysics Data System (ADS)
Stoykova, Elena; Gotchev, Atanas; Sainov, Ventseslav
2011-01-01
Real-time accomplishment of phase-shifting profilometry through simultaneous projection and recording of fringe patterns requires a reliable phase retrieval procedure. In the present work we consider a four-wavelength multi-camera system with four sinusoidal phase gratings for pattern projection that implements a four-step algorithm. Successful operation of the system depends on overcoming two challenges which stem from the inherent limitations of the phase-shifting algorithm, namely the demand for a sinusoidal fringe profile and the necessity to ensure equal background and contrast of fringes in the recorded fringe patterns. As a first task, we analyze the systematic errors due to the combined influence of the higher harmonics and multi-wavelength illumination in the Fresnel diffraction zone, considering the case when the modulation parameters of the four gratings are different. As a second task, we simulate the system performance to evaluate the degrading effect of the speckle noise and the spatially varying fringe modulation at non-uniform illumination on the overall accuracy of the profilometric measurement. We consider the case of non-correlated speckle realizations in the recorded fringe patterns due to four-wavelength illumination. Finally, we apply a phase retrieval procedure which includes normalization, background removal and denoising of the recorded fringe patterns to both simulated and measured data obtained for a dome surface.
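The four-step algorithm the four gratings implement reduces, per pixel, to one closed-form expression; the sketch below assumes the ideal conditions the abstract identifies as the two challenges (sinusoidal fringes, equal background and contrast across frames).

    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        # Frames taken at phase shifts 0, pi/2, pi, 3*pi/2 of a fringe
        # I_n = A + B*cos(phi + delta_n) satisfy
        #   I4 - I2 = 2*B*sin(phi),  I1 - I3 = 2*B*cos(phi),
        # so the wrapped phase is recovered per pixel as:
        return np.arctan2(i4 - i2, i1 - i3)

Higher harmonics in the projected fringes or unequal channel background and contrast break these two identities, which is exactly why the work analyzes those systematic errors and normalizes the patterns before retrieval.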
Iterative-Transform Phase Retrieval Using Adaptive Diversity
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop. An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate resembles that of a reference flat.
Contact replacement for NMR resonance assignment.
Xiong, Fei; Pandurangan, Gopal; Bailey-Kellogg, Chris
2008-07-01
Complementing its traditional role in structural studies of proteins, nuclear magnetic resonance (NMR) spectroscopy is playing an increasingly important role in functional studies. NMR dynamics experiments characterize motions involved in target recognition, ligand binding, etc., while NMR chemical shift perturbation experiments identify and localize protein-protein and protein-ligand interactions. The key bottleneck in these studies is to determine the backbone resonance assignment, which allows spectral peaks to be mapped to specific atoms. This article develops a novel approach to address that bottleneck, exploiting an available X-ray structure or homology model to assign the entire backbone from a set of relatively fast and cheap NMR experiments. We formulate contact replacement for resonance assignment as the problem of computing correspondences between a contact graph representing the structure and an NMR graph representing the data; the NMR graph is a significantly corrupted, ambiguous version of the contact graph. We first show that by combining connectivity and amino acid type information, and exploiting the random structure of the noise, one can provably determine unique correspondences in polynomial time with high probability, even in the presence of significant noise (a constant number of noisy edges per vertex). We then detail an efficient randomized algorithm and show that, over a variety of experimental and synthetic datasets, it is robust to typical levels of structural variation (1-2 Å), noise (250-600%) and missing data (10-40%). Our algorithm achieves very good overall assignment accuracy, above 80% in alpha-helices, 70% in beta-sheets and 60% in loop regions. Our contact replacement algorithm is implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
NASA Astrophysics Data System (ADS)
Al-Muraeb, Ahmed Mohammed Maim
This dissertation presents new approaches to the design of a photonic crystal fiber Bragg grating, a main component in wavelength-tunable fiber and solid-state laser (SSL) systems operating in the eye-safe wavelength region (1.4 - 2 μm). Although they have their own name, fiber lasers can be categorized as SSLs, as they are used in making ion-doped SSLs. Today, however, fiber lasers compete with and threaten to replace most high-power bulk SSLs and even some gas lasers. Hence, an eye-safe dual-wavelength Tunable Fiber Ring Laser (TFRL) system is considered in this work. This work addresses: 1. Laser application areas in the eye-safe region, a description of the TFRL system, and wavelength-tuning mechanisms, with focus on the 1.8 - 2 μm range. 2. An optimal design method for the Fiber Bragg Grating (FBG) using the Bat Algorithm with the novel Adaptive Position Update (APU-BA) (our work [1]). The latter enhances the search performance and accuracy of BA for FBG design; APU-BA also shows better search performance and higher accuracy than previously reported methods and algorithms. 3. Investigation and design of novel High-Birefringence Photonic Crystal Fiber (HB-PCF) structures based on the Binary Morse-Thue fractal Sequence (BMTS) [2]. The latter offers desirably higher birefringence and lower confinement loss with dispersion-free single-mode operation in the eye-safe region of interest (1.8 - 2 μm). 4. Combining the above results for the final design of the photonic crystal fiber Bragg grating device (serving as the wavelength-selective reflector in the TFRL). Fiber Bragg grating design and analysis were carried out using MATLAB®, yielding the refractive-index modulation over the designed FBG length for a given target FBG reflectance spectrum. A hexagonal, standard silica glass, solid-core, 5-ring HB-PCF with circular air holes is designed based on the BMTS. The COMSOL Multiphysics® Wave Optics Module is used in modeling and analysis for the design. Four BMTS formations were proposed and compared in terms of PCF design parameters (mainly birefringence). The structure geometry is designed to be consistent with the fabrication of commercially available PCFs.
Route towards cylindrical cloaking at visible frequencies using an optimization algorithm
NASA Astrophysics Data System (ADS)
Rottler, Andreas; Krüger, Benjamin; Heitmann, Detlef; Pfannkuche, Daniela; Mendach, Stefan
2012-12-01
We derive a model based on the Maxwell-Garnett effective-medium theory that describes a cylindrical cloaking shell composed of metal rods which are radially aligned in a dielectric host medium. We propose and demonstrate a minimization algorithm that calculates, for given material parameters, the optimal geometrical parameters of the cloaking shell such that its effective optical parameters best fit the required permittivity distribution for cylindrical cloaking. By means of sophisticated full-wave simulations we find that a cylindrical cloak with good performance using silver as the metal can be designed with our algorithm for wavelengths in the red part of the visible spectrum (623 nm < λ < 773 nm). We also present a full-wave simulation of such a cloak at an exemplary wavelength of λ = 729 nm (ℏω = 1.7 eV), which indicates that our model is useful for finding design rules of cloaks with good cloaking performance. Our calculations investigate a structure that is easy to fabricate using standard preparation techniques and therefore pave the way to a realization of guiding light around an object at visible frequencies, thus rendering it invisible.
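For orientation, the isotropic form of the Maxwell-Garnett mixing rule underlying such models reads as follows; the paper's shell uses radially aligned rods (an anisotropic variant), so this scalar sketch and its example values are illustrative only.

    import numpy as np

    def maxwell_garnett(eps_i, eps_h, f):
        # Quasi-static Maxwell-Garnett effective permittivity of
        # inclusions (eps_i, volume fraction f) in a host (eps_h):
        #   (eps_eff - eps_h)/(eps_eff + 2*eps_h)
        #       = f * (eps_i - eps_h)/(eps_i + 2*eps_h)
        return eps_h * (eps_i * (1 + 2 * f) + 2 * eps_h * (1 - f)) \
                     / (eps_i * (1 - f) + eps_h * (2 + f))

    # Illustrative values: a silver-like permittivity near 700 nm in a
    # dielectric host at a few percent metal filling.
    print(maxwell_garnett(-20 + 1j, 2.25, 0.05))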
Ensembles of radial basis function networks for spectroscopic detection of cervical precancer
NASA Technical Reports Server (NTRS)
Tumer, K.; Ramanujam, N.; Ghosh, J.; Richards-Kortum, R.
1998-01-01
The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337-, 380-, and 460-nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from precancerous tissue samples. The use of connectionist methods such as multilayered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated and near real-time implementation of precancer detection in the hands of nonexperts. The results are more reliable, direct, and accurate than those achieved by either human experts or multivariate statistical algorithms.
Supercontinuum Fourier transform spectrometry with balanced detection on a single photodiode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goncharov, Vasily; Hall, Gregory
2016-08-25
Here, we have developed phase-sensitive signal detection and processing algorithms for Fourier transform spectrometers fitted with supercontinuum sources for applications requiring ultimate sensitivity. Similar to the well-established approach of source-noise cancellation through balanced detection of monochromatic light, our method is capable of reducing the relative intensity noise of polychromatic light by 40 dB. Unlike conventional balanced detection, which relies on differential absorption measured with a well-matched pair of photo-detectors, our algorithm utilizes phase-sensitive differential detection on a single photodiode and is capable of real-time correction for instabilities in supercontinuum spectral structure over a broad range of wavelengths. The resulting method is universal in terms of applicable wavelengths and compatible with commercial spectrometers. We present a proof-of-principle experimental demonstration.
Future aircraft networks and schedules
NASA Astrophysics Data System (ADS)
Shu, Yan
2011-07-01
Because of the importance of air transportation scheduling, the emergence of small aircraft and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time, is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents computational results of these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of the existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implications of using new aircraft in the future flight schedules. Finally, future research in three areas (model, computational method, and simulation for validation) is proposed.
Bor, E; Turduev, M; Kurt, H
2016-08-01
Photonic structure designs based on optimization algorithms provide superior properties compared to those using intuition-based approaches. In the present study, we numerically and experimentally demonstrate subwavelength focusing of light using wavelength scale absorption-free dielectric scattering objects embedded in an air background. An optimization algorithm based on differential evolution integrated into the finite-difference time-domain method was applied to determine the locations of each circular dielectric object with a constant radius and refractive index. The multiobjective cost function defined inside the algorithm ensures strong focusing of light with low intensity side lobes. The temporal and spectral responses of the designed compact photonic structure provided a beam spot size in air with a full width at half maximum value of 0.19λ, where λ is the wavelength of light. The experiments were carried out in the microwave region to verify numerical findings, and very good agreement between the two approaches was found. The subwavelength light focusing is associated with a strong interference effect due to nonuniformly arranged scatterers and an irregular index gradient. Improving the focusing capability of optical elements by surpassing the diffraction limit of light is of paramount importance in optical imaging, lithography, data storage, and strong light-matter interaction.
Manycast routing, modulation level and spectrum assignment over elastic optical networks
NASA Astrophysics Data System (ADS)
Luo, Xiao; Zhao, Yang; Chen, Xue; Wang, Lei; Zhang, Min; Zhang, Jie; Ji, Yuefeng; Wang, Huitao; Wang, Taili
2017-07-01
Manycast is a point-to-multipoint transmission framework that requires only a subset of the destination nodes to be successfully reached. It is particularly applicable for dealing with large amounts of data simultaneously in bandwidth-hungry, dynamic and cloud-based applications. As traffic in these applications increases rapidly, elastic optical networks (EONs) may be relied on to achieve high-throughput manycast. With their finer spectrum granularity, EONs allow flexible access to the network spectrum and can efficiently provide demands with exactly the spectrum resources they require. In this paper, we focus on the manycast routing, modulation level and spectrum assignment (MA-RMLSA) problem in EONs. Both EON planning with static manycast traffic and EON provisioning with dynamic manycast traffic are investigated. An integer linear programming (ILP) model is formulated to derive the MA-RMLSA problem in the static manycast scenario. A corresponding heuristic, the manycast routing, modulation level and spectrum assignment genetic algorithm (MA-RMLSA-GA), is then proposed to handle both the static and dynamic manycast scenarios. MA-RMLSA-GA jointly optimizes destination-node selection, routing light-tree constitution, modulation level allocation and spectrum resource assignment, achieving an effective improvement in network performance. Simulation results reveal that the MA-RMLSA strategies offered by MA-RMLSA-GA differ only slightly from the optimal solutions provided by the ILP model in the static scenario. Moreover, the results demonstrate that MA-RMLSA-GA realizes a highly efficient MA-RMLSA strategy with the lowest blocking probability in the dynamic scenario compared with benchmark algorithms.
On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment.
Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela
2017-01-17
Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.
Development of a see-through hollow cathode discharge lamp for (Li/Ne) optogalvanic studies
NASA Astrophysics Data System (ADS)
Saini, V. K.; Kumar, P.; Sarangpani, K. K.; Dixit, S. K.; Nakhe, S. V.
2017-09-01
Development of a demountable and see-through hollow cathode (HC) discharge lamp suitable for optogalvanic (OG) spectroscopy is described. The design of the HC lamp is simple, compact, and inexpensive. Lithium, investigated rarely by the OG method, is selected as the cathode material, as its isotopes are important for the nuclear industry. The HC lamp is characterized electrically and optically for a discharge-oscillation-free OG effect. Strong OG signals of lithium as well as neon (the buffer gas) are produced precisely upon irradiation by a copper vapor laser pumped tunable dye laser. The HC lamp is capable of generating a clean OG resonance spectrum in the available dye laser wavelength scanning range (627.5-676 nm) obtained with 4-(Dicyanomethylene)-2-methyl-6-(4-dimethylaminostyryl)-4H-pyran dye. About 28 resonant OG lines are explicitly observed. The majority of them have been identified using the j-l coupling scheme and assigned to well-known neon transitions. One line, at a wavelength of about 670.80 nm, is assigned to lithium and resolved into its fine-structure (²S₁/₂ → ²P₁/₂,₃/₂) transitions. These OG transitions allow 0.33 cm⁻¹ accuracy and can be used to supplement OG transition data available from other sources to calibrate the wavelength of a scanning dye laser with precision at atomic levels.
Knowledge-Based Scheduling of Arrival Aircraft in the Terminal Area
NASA Technical Reports Server (NTRS)
Krzeczowski, K. J.; Davis, T.; Erzberger, H.; Lev-Ram, Israel; Bergh, Christopher P.
1995-01-01
A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, as well as workload-reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper describes the scheduling algorithms, gives examples of their use, and presents data regarding their potential benefits to the air traffic system.
NASA Astrophysics Data System (ADS)
Liao, Luhua; Li, Lemin; Wang, Sheng
2006-12-01
We investigate the protection approach for dynamic multicast traffic under shared risk link group (SRLG) constraints in meshed wavelength-division-multiplexing optical networks. We present a shared protection algorithm called dynamic segment shared protection for multicast traffic (DSSPM), which can dynamically adjust the link cost according to the current network state and can establish a primary light-tree as well as corresponding SRLG-disjoint backup segments for a dependable multicast connection. A backup segment can efficiently share the wavelength capacity of its working tree and the common resources of other backup segments based on SRLG-disjoint constraints. The simulation results show that DSSPM not only can protect the multicast sessions against a single-SRLG breakdown, but can make better use of the wavelength resources and also lower the network blocking probability.
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Takenaka, H.; Higurashi, A.
2013-12-01
We develop a new satellite remote sensing algorithm to retrieve the properties of aerosol particles in the atmosphere. In recent years, high-resolution, multi-wavelength, and multiple-angle observation data have been obtained by ground-based spectral radiometers and imaging sensors on board satellites. With this development, optimized multi-parameter remote sensing methods based on Bayesian theory have become widely used (Turchin and Nozik, 1969; Rodgers, 2000; Dubovik et al., 2000). Additionally, direct use of radiative transfer calculation has been employed for non-linear remote sensing problems in place of look-up-table methods, supported by the progress of computing technology (Dubovik et al., 2011; Yoshida et al., 2011). We are developing a flexible multi-pixel and multi-parameter remote sensing algorithm for aerosol optical properties. In this algorithm, the inversion method is a combination of the MAP method (maximum a posteriori method, Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, we include a radiative transfer code, Rstar (Nakajima and Tanaka, 1986, 1988), solved numerically at each iteration of the solution search. The Rstar code has been directly used in the AERONET operational processing system (Dubovik and King, 2000). Retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine-mode, sea salt, and dust particles, the volume soot fraction in fine-mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval method to all the sub-domains in the target area. We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo with GOSAT/CAI imager data to test the algorithm over land. In this test, we simulated satellite-observed radiances for a sub-domain of 5 by 5 pixels with the Rstar code, assuming wavelengths of 380, 674, 870 and 1600 nm, the US standard atmosphere, and several aerosol and ground surface conditions. The results showed that the AOTs of fine-mode and dust particles, the soot fraction, and the ground surface albedo at 674 nm are retrieved within absolute differences of 0.04, 0.01, 0.06 and 0.006 from the true values, respectively, for a dark surface, and 0.06, 0.03, 0.04 and 0.10, respectively, for a bright surface. We will conduct more tests to study the information content of the parameters needed for aerosol and land surface remote sensing with different boundary conditions among sub-domains.
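As a rough illustration of how a MAP retrieval with a Phillips-Twomey smoothing constraint can be organized, the following Python sketch minimizes the combined cost function by Gauss-Newton iteration. All names (forward, jacobian, the covariance matrices, the toy linear model) are placeholders standing in for the paper's Rstar-based forward model and error statistics, not the authors' code.

```python
import numpy as np

def second_difference_matrix(n):
    """Phillips-Twomey smoothing operator H = D.T @ D (D: second differences)."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D.T @ D

def map_smooth_retrieval(y, forward, jacobian, x_a, S_a_inv, S_e_inv, gamma,
                         n_iter=20):
    """Gauss-Newton search for the minimizer of
    J(x) = (y-F(x))' Se^-1 (y-F(x)) + (x-xa)' Sa^-1 (x-xa) + gamma * x' H x."""
    H = second_difference_matrix(x_a.size)
    x = x_a.copy()
    for _ in range(n_iter):
        K = jacobian(x)                  # forward-model Jacobian at current x
        r = y - forward(x)
        A = K.T @ S_e_inv @ K + S_a_inv + gamma * H
        b = K.T @ S_e_inv @ r - S_a_inv @ (x - x_a) - gamma * H @ x
        x = x + np.linalg.solve(A, b)    # Gauss-Newton update
    return x

# Toy usage with a linear forward model F(x) = K x; in the paper's setting
# F would be the Rstar radiative transfer code evaluated at each iteration.
K = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 0.5]])
y = K @ np.array([0.2, 0.4, 0.1])
x_hat = map_smooth_retrieval(y, lambda x: K @ x, lambda x: K,
                             x_a=np.zeros(3), S_a_inv=0.1 * np.eye(3),
                             S_e_inv=np.eye(2), gamma=0.01)
print(x_hat)   # regularized estimate, pulled slightly toward the prior
```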
Color lensless digital holographic microscopy with micrometer resolution.
Garcia-Sucerquia, Jorge
2012-05-15
Color digital lensless holographic microscopy with micrometer resolution is presented. Multiwavelength illumination of a biological sample and a posteriori color composition of the amplitude images individually reconstructed are used to obtain full-color representation of the microscopic specimen. To match the sizes of the reconstructed holograms for each wavelength, a reconstruction algorithm that allows for choosing the pixel size at the reconstruction plane independently of the wavelength and the reconstruction distance is used. The method is illustrated with experimental results.
A scoring algorithm for predicting the presence of adult asthma: a prospective derivation study.
Tomita, Katsuyuki; Sano, Hiroyuki; Chiba, Yasutaka; Sato, Ryuji; Sano, Akiko; Nishiyama, Osamu; Iwanaga, Takashi; Higashimoto, Yuji; Haraguchi, Ryuta; Tohda, Yuji
2013-03-01
To predict the presence of asthma in adult patients with respiratory symptoms, we developed a scoring algorithm using clinical parameters. We prospectively analysed 566 adult outpatients who visited Kinki University Hospital for the first time with complaints of nonspecific respiratory symptoms. Asthma was comprehensively diagnosed by specialists using symptoms, signs, and objective tools including bronchodilator reversibility and/or the assessment of bronchial hyperresponsiveness (BHR). Multiple logistic regression analysis was performed to categorise patients and determine the accuracy of diagnosing asthma. A scoring algorithm using the symptom-sign score was developed, based on diurnal variation of symptoms (1 point), recurrent episodes (2 points), medical history of allergic diseases (1 point), and wheeze sound (2 points). A score of >3 had 35% sensitivity and 97% specificity for discriminating between patients with and without asthma and assigned a high probability of having asthma (accuracy 90%). A score of 1 or 2 points assigned intermediate probability (accuracy 68%). After providing additional data of forced expiratory volume in 1 second/forced vital capacity (FEV(1)/FVC) ratio <0.7, the post-test probability of having asthma was increased to 93%. A score of 0 points assigned low probability (accuracy 31%). After providing additional data of positive reversibility, the post-test probability of having asthma was increased to 88%. This pragmatic diagnostic algorithm is useful for predicting the presence of adult asthma and for determining the appropriate time for consultation with a pulmonologist.
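For concreteness, the published point weights and probability bands translate into a few lines of Python. This is an illustrative paraphrase of the scoring rules quoted above, not the authors' code; it reads the abstract's ">3" together with the "1 or 2 points" intermediate band as meaning that the high-probability band starts at 3 points.

```python
def symptom_sign_score(diurnal_variation, recurrent_episodes,
                       allergic_history, wheeze):
    """Weighted sum of the four clinical items (each passed as a bool)."""
    return (1 * diurnal_variation + 2 * recurrent_episodes +
            1 * allergic_history + 2 * wheeze)

def pretest_band(score):
    """Probability bands as read from the abstract (3+ taken as 'high')."""
    if score >= 3:
        return "high probability (accuracy 90%)"
    if score >= 1:
        return "intermediate probability (accuracy 68%)"
    return "low probability (accuracy 31%)"

print(pretest_band(symptom_sign_score(True, True, False, True)))  # high
```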
A novel algorithm for validating peptide identification from a shotgun proteomics search engine.
Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J
2013-03-01
Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to peptides by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
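The SVM-based refining step can be sketched with scikit-learn as below. The two features (SEQUEST's XCorr and deltaCn are typical choices) and the tiny training matrix are hypothetical stand-ins; De-Noise's actual feature set, kernel, and training data are not given in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training matrix: one row per peptide-spectrum match (PSM),
# labels 1 = correct, 0 = incorrect (e.g. decoy hits).
X_train = np.array([[3.2, 0.45], [1.1, 0.02], [2.8, 0.30], [0.9, 0.01]])
y_train = np.array([1, 0, 1, 0])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

X_new = np.array([[2.5, 0.25]])
print(clf.predict(X_new))   # 1 -> retain the PSM, 0 -> filter it out
```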
Richardson, Keith; Denny, Richard; Hughes, Chris; Skilling, John; Sikora, Jacek; Dadlez, Michał; Manteca, Angel; Jung, Hye Ryung; Jensen, Ole Nørregaard; Redeker, Virginie; Melki, Ronald; Langridge, James I.; Vissers, Johannes P.C.
2013-01-01
A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data uncertainties via Poisson statistics modified by a noise contribution that is determined automatically during an initial normalization stage. Protein quantification relies on assignments of component peptides to the acquired data. These assignments are generally of variable reliability and may not be present across all of the experiments comprising an analysis. It is also possible for a peptide to be identified to more than one protein in a given mixture. For these reasons the algorithm accepts a prior probability of peptide assignment for each intensity measurement. The model is constructed in such a way that outliers of any type can be automatically reweighted. Two discrete normalization methods can be employed. The first method is based on a user-defined subset of peptides, while the second method relies on the presence of a dominant background of endogenous peptides for which the concentration is assumed to be unaffected. Normalization is performed using the same computational and statistical procedures employed by the main quantification algorithm. The performance of the algorithm will be illustrated on example data sets, and its utility demonstrated for typical proteomics applications. The quantification algorithm supports relative protein quantification based on precursor and product ion intensities acquired by means of data-dependent methods, originating from all common isotopically-labeled approaches, as well as label-free ion intensity-based data-independent methods. PMID:22871168
Is Mc Leod's Patent Pending Naturoptic Method for Restoring Healthy Vision Easy and Verifiable?
NASA Astrophysics Data System (ADS)
Niemi, Paul; McLeod, David; McLeod, Roger
2006-10-01
RDM asserts that he and people he has trained can assign visual tasks from standard vision assessment charts, or better replacements, proceeding through incremental changes and such rapid improvements that healthy vision can be restored. Mc Leod predicts that in visual tasks with pupil diameter changes, wavelengths change proportionally. A longer, quasimonochromatic wavelength interval is coincident with foveal cones, and rods. A shorter, partially overlapping interval separately aligns with extrafoveal cones. Wavelengths follow the Airy disk radius formula. Niemi can evaluate if it is true that visual health merely requires triggering and facilitating the demands of possibly overridden feedback signals. The method and process are designed so that potential Naturopathic and other select graduate students should be able to self-fund their higher-level educations from preferential franchising arrangements of earnings while they are in certain programs.
Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.
2016-01-01
Background Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6–0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331
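The score combination described in the Methods can be sketched as follows. The classifier functions, logistic weights, and candidate codes are all hypothetical stand-ins, since SOCcer's actual features and trained coefficients are not given here; returning the score alongside the code mirrors the paper's use of the score to flag assignments needing review.

```python
import math

def soc_score(job, soc_code, classifiers, weights, intercept):
    """Combine job-title, task and industry classifier scores for one
    candidate SOC code through a trained logistic model."""
    z = intercept + sum(w * clf(job, soc_code)
                        for clf, w in zip(classifiers, weights))
    return 1.0 / (1.0 + math.exp(-z))

def assign_soc(job, candidate_codes, classifiers, weights, intercept):
    """Assign the highest-scoring SOC code; the score is returned too so
    low-confidence assignments can be routed to expert review."""
    scored = [(soc_score(job, c, classifiers, weights, intercept), c)
              for c in candidate_codes]
    return max(scored)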
ASSURED CLOUD COMPUTING UNIVERSITY CENTER OF EXCELLENCE (ACC UCOE)
2018-01-18
Research topics include infrastructure security, the design of algorithms and techniques for real-time assuredness in cloud computing, and map-reduce task assignment with data locality.
GeoSearcher: Location-Based Ranking of Search Engine Results.
ERIC Educational Resources Information Center
Watters, Carolyn; Amoudi, Ghada
2003-01-01
Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
Fuzzy-logic based Q-Learning interference management algorithms in two-tier networks
NASA Astrophysics Data System (ADS)
Xu, Qiang; Xu, Zezhong; Li, Li; Zheng, Yan
2017-10-01
Offloading traffic from the macrocell network and enhancing coverage can be realized by deploying femtocells in indoor scenarios. However, the system performance of the two-tier network can be impaired by co-tier and cross-tier interference. In this paper, a distributed resource allocation scheme is studied in which each femtocell base station is self-governed and resources cannot be assigned centrally through the gateway. A novel Q-Learning interference management scheme is proposed that is divided into cooperative and independent parts. In the cooperative algorithm, interference information is exchanged between the cell-edge users, which are classified by fuzzy logic within the same cell. Meanwhile, we allocate the orthogonal subchannels to the high-rate cell-edge users to disperse the interference power when the data rate requirement is satisfied. In the independent algorithm, resources are assigned directly according to the minimum power principle. Simulation results demonstrate significant performance improvements in terms of average data rate, interference power and energy efficiency over state-of-the-art resource allocation algorithms.
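The tabular Q-learning machinery such a scheme builds on can be sketched as below. The state encoding, the action set of eight subchannels, and the reward signal are placeholders, and the cooperative information exchange described in the abstract is not modeled here.

```python
import random

ACTIONS = list(range(8))          # hypothetical set of 8 subchannels

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One-step Q-learning update for a femtocell agent; Q maps
    (state, action) -> value."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def choose_subchannel(Q, state, epsilon=0.1):
    """Epsilon-greedy subchannel selection for a femtocell base station."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
```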
NASA Astrophysics Data System (ADS)
Mehta, Dalip Singh; Sharma, Anuradha; Dubey, Vishesh; Singh, Veena; Ahmad, Azeem
2016-03-01
We present single-shot white light interference microscopy for the quantitative phase imaging (QPI) of biological cells and tissues. A common-path white light interference microscope is developed, and a color white light interferogram is recorded by a three-chip color CCD camera. The recorded white light interferogram is decomposed into its red, green and blue wavelength component interferograms, which are processed to determine the refractive index (RI) at the different color wavelengths. The decomposed interferograms are analyzed using the local model fitting (LMF) algorithm developed for reconstructing the phase map from a single interferogram. LMF is a slightly off-axis interferometric QPI method that employs only a single image, so it is fast and accurate. The present method is very useful for dynamic processes where the path length changes at the millisecond level. From a single interferogram, wavelength-dependent quantitative phase images of human red blood cells (RBCs) are reconstructed and the refractive index is determined. The LMF algorithm is simple to implement and computationally efficient. The results are compared with conventional phase-shifting interferometry and Hilbert transform techniques.
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues such as loss of uncertain information, high time complexity and nonadaptive thresholds have not been addressed well in the previous density-based algorithm FDBSCAN and the hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm, PDBSCAN, which improves on FDBSCAN in the following respects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, and direct reachability probability, thus reducing the complexity and solving the issue of the nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify PDBSCAN into an improved version (PDBSCANi) by using a better cluster assignment strategy to ensure that every object is assigned to the most appropriate cluster, thus solving the issue of the nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulty clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm, POPTICS, by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS reveals the cluster structures of datasets with different local densities in different regions better than PDBSCAN and PDBSCANi do, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over existing algorithms in accuracy and efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Côté, Nicolas; Bedwani, Stéphane; Carrier, Jean-François, E-mail: jean-francois.carrier.chum@ssss.gouv.qc.ca
Purpose: An improvement in tissue assignment for low-dose-rate brachytherapy (LDRB) patients using more accurate Monte Carlo (MC) dose calculation was accomplished with a metallic artifact reduction (MAR) method specific to dual-energy computed tomography (DECT). Methods: The proposed MAR algorithm followed a four-step procedure. The first step involved applying a weighted blend of both DECT scans (I_H/L) to generate a new image (I_Mix). This action minimized Hounsfield unit (HU) variations surrounding the brachytherapy seeds. In the second step, the mean HU of the prostate in I_Mix was calculated and shifted toward the mean HU of the two original DECT images (I_H/L). The third step involved smoothing the newly shifted I_Mix and the two original I_H/L, followed by a subtraction of both, generating an image that represented the metallic artifact (I_A,(H/L)) with reduced noise levels. The final step consisted of subtracting the original I_H/L from the newly generated I_A,(H/L), obtaining a final image corrected for metallic artifacts. Following the completion of the algorithm, a DECT stoichiometric method was used to extract the relative electronic density (ρ_e) and effective atomic number (Z_eff) at each voxel of the corrected scans. Tissue assignment could then be determined from these two newly acquired physical parameters. Each voxel was assigned the tissue bearing the closest resemblance in terms of ρ_e and Z_eff, compared with values from the ICRU 42 database. An MC study was then performed to compare the dosimetric impacts of alternative MAR algorithms. Results: An improvement in tissue assignment was observed with the DECT MAR algorithm, compared to the single-energy computed tomography (SECT) approach. In a phantom study, tissue misassignment was found to reach 0.05% of voxels using the DECT approach, compared with 0.40% using the SECT method. Comparison of the DECT and SECT D_90 dose parameter (the dose received by 90% of the volume) indicated that D_90 could be underestimated by up to 2.3% using the SECT method. Conclusions: The DECT MAR approach is a simple alternative for reducing the metallic artifacts found in LDRB patient scans. Images can be processed quickly and do not require the determination of x-ray spectra. Substantial information on density and atomic number can also be obtained. Furthermore, calcifications within the prostate are detected by the tissue assignment algorithm. This enables more accurate, patient-specific MC dose calculations.
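Read literally, the four steps map onto a short image-processing sketch like the one below. The blend weight, the smoothing kernel, and the final subtraction sign are my assumptions (the abstract's step 4 wording is ambiguous about sign), so this illustrates the structure of the procedure rather than reproducing the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dect_mar(I_H, I_L, prostate_mask, w=0.5, sigma=2.0):
    """Illustrative four-step DECT metal-artifact reduction; w and sigma
    are free parameters here, not values from the paper."""
    # Step 1: weighted blend of the two DECT scans to damp HU variations
    # around the seeds.
    I_mix = w * I_H + (1.0 - w) * I_L
    corrected = []
    for I in (I_H, I_L):
        # Step 2: shift the prostate mean HU of I_mix toward this scan's mean.
        I_shift = I_mix + (I[prostate_mask].mean() - I_mix[prostate_mask].mean())
        # Step 3: smooth both images and subtract to obtain a low-noise
        # estimate of the metallic artifact.
        I_art = gaussian_filter(I, sigma) - gaussian_filter(I_shift, sigma)
        # Step 4: remove the artifact estimate from the original scan.
        corrected.append(I - I_art)
    return corrected  # [corrected high-energy scan, corrected low-energy scan]
```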
Zhang, Tao; Gao, Feng; Muhamedsalih, Hussam; Lou, Shan; Martin, Haydn; Jiang, Xiangqian
2018-03-20
The phase slope method, which estimates height through the fringe pattern frequency, and the algorithm which estimates height through the fringe phase are the fringe analysis algorithms widely used in interferometry. Generally, both extract the phase information by filtering the signal in the frequency domain after a Fourier transform. Among the numerous papers in the literature about these algorithms, it is found that the design of the filter, which plays an important role, has never been discussed in detail. This paper focuses on the filter design in these algorithms for wavelength scanning interferometry (WSI), seeking to optimize the parameters to acquire optimal results. The spectral characteristics of the interference signal are analyzed first. The effective signal is found to be narrow-band (near single frequency), and the central frequency is calculated theoretically, which determines the position of the filter pass-band. The width of the filter window is optimized in simulation to balance the elimination of noise against the ringing of the filter. Experimental validation of the approach is provided, and the results agree very well with the simulation. The experiments show that accuracy can be improved by optimizing the filter design, especially when the signal quality, i.e., the signal-to-noise ratio (SNR), is low. The proposed method also shows the potential of improving immunity to environmental noise by designing an adaptive filter, once the signal SNR can be estimated accurately.
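A minimal version of the frequency-domain filtering step for one pixel's wavelength-scanned signal might look like this. Here f_center and half_width stand in for the theoretically computed central frequency and the optimized window width discussed in the abstract; the rectangular window is a simplification.

```python
import numpy as np

def filtered_phase(signal, f_center, half_width):
    """Isolate the near-single-frequency interference term of a WSI pixel
    signal with a rectangular pass-band around f_center (in cycles per
    sample), then return the unwrapped analytic-signal phase."""
    S = np.fft.fft(signal)
    f = np.fft.fftfreq(signal.size)
    window = (np.abs(f - f_center) <= half_width).astype(float)
    analytic = np.fft.ifft(S * window)   # keeps the positive-frequency lobe only
    return np.unwrap(np.angle(analytic))

# Example: a synthetic fringe at 0.1 cycles/sample, recovered from noise.
n = np.arange(512)
sig = np.cos(2 * np.pi * 0.1 * n) + 0.3 * np.random.randn(512)
phase = filtered_phase(sig, f_center=0.1, half_width=0.02)
```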
NASA Astrophysics Data System (ADS)
Fukuda, Satoru; Nakajima, Teruyuki; Takenaka, Hideaki; Higurashi, Akiko; Kikuchi, Nobuyuki; Nakajima, Takashi Y.; Ishida, Haruma
2013-12-01
A satellite aerosol retrieval algorithm was developed to utilize a near-ultraviolet band of the Greenhouse gases Observing SATellite/Thermal And Near infrared Sensor for carbon Observation (GOSAT/TANSO)-Cloud and Aerosol Imager (CAI). At near-ultraviolet wavelengths, the surface reflectance over land is smaller than at visible wavelengths; it is therefore thought possible to reduce retrieval error by using the near-ultraviolet spectral region. In the present study, we first developed a cloud shadow detection algorithm that uses the first and second minimum reflectances at 380 nm and 680 nm, based on the difference in the Rayleigh scattering contribution of these two bands. We then developed a new surface reflectance correction algorithm, the modified Kaufman method, which uses minimum reflectance data at 680 nm and the NDVI to estimate the surface reflectance at 380 nm. This algorithm was found to be particularly effective at reducing the aerosol effect remaining in the 380 nm minimum reflectance; this effect has previously proven difficult to remove owing to the infrequent sampling associated with the three-day recursion period of GOSAT and the narrow CAI swath of 1000 km. Finally, we applied these two algorithms to retrieve aerosol optical thicknesses over a land area. Our results exhibited better agreement with sun-sky radiometer observations than results obtained using a simple surface reflectance correction technique based on minimum radiances.
An ontology-based nurse call management system (oNCS) with probabilistic priority assessment
2011-01-01
Background The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors, assigned to a patient. Methods The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions The execution time of the nurse call algorithm is on average 50.333 ms. Moreover, the oNCS system significantly improves the assignment of nurses to calls. Calls generally have a nurse present faster and the workload-distribution amongst the nurses improves. PMID:21294860
NASA Astrophysics Data System (ADS)
2016-04-01
The strong El Nino event in 2015 resulted in below-normal rainfall, leading to very dry conditions throughout Indonesia from August through October 2015. These conditions in turn allowed for exceptionally large numbers of biomass burning fires with very high emissions of aerosols. Over the island of Borneo, three AERONET sites (Palangkaraya, Pontianak, and Kuching) measured monthly mean fine-mode aerosol optical depth (AOD) at 500 nm from the spectral deconvolution algorithm ranging from 1.6 to 3.7 in September and October, with daily average AOD as high as 6.1. In fact, the AOD was sometimes too high to obtain any significant signal at mid-visible wavelengths; therefore a new algorithm previously developed for the AERONET Version 3 database was invoked to retain the measurements in as many of the red and near-infrared wavelengths (675, 870, 1020, and 1640 nm) as possible and to analyze the AOD at those wavelengths. The AOD at these longer wavelengths is then utilized to provide an estimate of the AOD in the mid-visible. Additionally, satellite retrievals of AOD at 550 nm from MODIS sensor data and the Dark Target, Deep Blue, and MAIAC algorithms were analyzed and compared to AERONET-measured AOD. Not surprisingly, the AOD was often too high for the satellite algorithms to measure accurate AOD on many days in the densest smoke regions. The AERONET sky radiance inversion algorithm was utilized to analyze retrievals of the aerosol optical properties of complex refractive indices and size distributions. Since the AOD was often extremely high, there was sometimes insufficient direct sun signal at the larger solar zenith angles (> 50 degrees) required for almucantar retrievals. However, the new hybrid sky radiance scan can attain sufficient scattering angle range even at small solar zenith angles when the 440 nm direct beam irradiance can be accurately measured, thereby allowing for many more retrievals, including at higher AOD levels during this event. Due to the extreme dryness in the region, significant biomass burning of peat soils occurred in some areas. The retrieved volume median radius of the fine mode increased from ~0.18 micron to ~0.25 micron as the AOD at 440 nm increased from 1 to 3. These are very large particles for biomass burning aerosol and are similar in size to smoke particles measured in Alaska during the very dry years of 2004 and 2005, when peat soil burning also contributed to the fuel burned. The average single scattering albedo over the wavelength range of 440 to 1020 nm was very high, ranging from ~0.96 to 0.98, indicative of dominant smoldering-phase combustion. These very high values of single scattering albedo for biomass burning aerosols are similar to those retrieved by AERONET for the Alaska smoke in 2004 and 2005.
Solving multiconstraint assignment problems using learning automata.
Horn, Geir; Oommen, B John
2010-02-01
This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
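The abstract does not spell out the probability-update scheme, so as a hedged illustration here is the classical linear reward-inaction (L_RI) rule on which variable-structure stochastic automata designs of this kind are typically built. The action set (e.g. candidate hosting nodes) and the environment's reward signal are placeholders.

```python
import random

def lri_update(p, chosen, rewarded, lam=0.05):
    """Linear reward-inaction (L_RI) update for one VSSA: move probability
    mass toward the chosen action on reward, leave the vector unchanged
    on penalty. The vector stays normalized."""
    if rewarded:
        for a in range(len(p)):
            if a == chosen:
                p[a] += lam * (1.0 - p[a])
            else:
                p[a] *= (1.0 - lam)
    return p

def choose_action(p):
    """Sample an action (e.g. a hosting node) from the probability vector."""
    return random.choices(range(len(p)), weights=p, k=1)[0]

# Example: four candidate nodes, uniform start, one rewarded choice of node 2.
p = [0.25, 0.25, 0.25, 0.25]
print(lri_update(p, chosen=2, rewarded=True))
```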
Resolution Study of a Hyperspectral Sensor using Computed Tomography in the Presence of Noise
2012-06-14
diffraction efficiency is dependent on wavelength. Compared to techniques developed by later work, simple algebraic reconstruction techniques were used...spectral dimension, using computed tomography (CT) techniques with only a finite number of diverse images. CTHIS require a reconstruction algorithm in...many frames are needed to reconstruct the spectral cube of a simple object using a theoretical lower bound. In this research a new algorithm is derived
Optimization of multicast optical networks with genetic algorithm
NASA Astrophysics Data System (ADS)
Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng
2007-11-01
In this letter, aiming to obtain the best multicast performance of an optical network in which video conference information is carried on a specified wavelength, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problem of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies by employing a novel numerical solution designed around an effective genetic algorithm. The results show that the optimal solutions obtained with the genetic algorithm are in accordance with those obtained with the traditional fictitious play method.
NASA Astrophysics Data System (ADS)
Bellotti, A.; Steffes, P. G.
2016-12-01
The Juno Microwave Radiometer (MWR) has six channels ranging from 1.36 to 50 cm and the ability to peer deep into the Jovian atmosphere. An artificial neural network algorithm has been developed to rapidly perform inversion for the deep abundance of ammonia, the deep abundance of water vapor, and atmospheric "stretch" (a parameter that reflects the deviation from a wet adiabat in the higher atmosphere). This algorithm is "trained" using simulated emissions at the six wavelengths computed with the Juno atmospheric microwave radiative transfer (JAMRT) model presented by Oyafuso et al. (this meeting). By exploiting emission measurements at the six wavelengths and at various incidence angles, the neural network can provide preliminary results of useful precision at a computational cost hundreds of times lower than conventional methods. This can quickly provide important insights into the variability and structure of the Jovian atmosphere.
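The train-once, invert-fast pattern can be sketched with a small multilayer perceptron. The random arrays below are stand-ins for the JAMRT-simulated brightness temperatures and atmospheric parameters, which are not given here; only the structure of the approach is illustrated.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in training set: X holds simulated brightness temperatures at the
# six MWR wavelengths, Y the target parameters (deep NH3, deep H2O, stretch).
rng = np.random.default_rng(0)
X = rng.uniform(100.0, 900.0, size=(5000, 6))
Y = rng.uniform(0.0, 1.0, size=(5000, 3))

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(X, Y)                    # expensive step, done once offline

tb_observed = X[:1]              # a measured six-channel spectrum
print(net.predict(tb_observed))  # fast inversion: a single forward pass
```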
Boehnke, Mitchell; Patel, Nayana; McKinney, Kristin; Clark, Toshimasa
The Society of Radiologists in Ultrasound (SRU 2005) and the American Thyroid Association (ATA 2009 and ATA 2015) have published algorithms for thyroid nodule management. Kwak et al. and other groups have described models that estimate thyroid nodules' malignancy risk. The aim of our study is to use Kwak's model to evaluate the tradeoffs in sensitivity and specificity of the SRU 2005, ATA 2009 and ATA 2015 management algorithms. 1,000,000 thyroid nodules were modeled in MATLAB. Ultrasound characteristics were modeled after published data. Malignancy risk was estimated per Kwak's model and assigned as a binary variable. All nodules were then assessed using the published management algorithms. With the malignancy variable as condition positivity and the algorithms' recommendation for FNA as test positivity, diagnostic performance was calculated. Modeled nodule characteristics mimic those of Kwak et al. 12.8% of nodules were assigned as malignant (malignancy risk range of 2.0-98%). FNA was recommended for 41% of nodules by SRU 2005, 66% by ATA 2009, and 82% by ATA 2015. Sensitivity and specificity differ significantly (p < 0.0001): 49% and 60% for SRU; 81% and 36% for ATA 2009; and 95% and 20% for ATA 2015. The SRU 2005, ATA 2009 and ATA 2015 algorithms are used routinely in clinical practice to determine whether thyroid nodule biopsy is indicated. We demonstrate significant differences in these algorithms' diagnostic performance, which result in a compromise between sensitivity and specificity.
Fast algorithm for automatically computing Strahler stream order
Lanfear, Kenneth J.
1990-01-01
An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
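The core ordering rule is easy to state in code. The recursive sketch below handles tree-structured networks only, whereas the algorithm described above also copes with braided streams and multiple drainage outlets; the dict-of-children representation is an assumption standing in for the GIS network data structure.

```python
def strahler_orders(children):
    """Compute Strahler order for every segment of a stream network given
    as a dict: segment -> list of upstream (child) segments."""
    order = {}

    def visit(seg):
        if seg in order:
            return order[seg]
        kid_orders = [visit(c) for c in children.get(seg, [])]
        if not kid_orders:                      # headwater segment
            order[seg] = 1
        else:
            top = max(kid_orders)
            # Order increases only when two or more tributaries share the
            # maximum order; otherwise the maximum is carried downstream.
            order[seg] = top + 1 if kid_orders.count(top) >= 2 else top
        return order[seg]

    for seg in children:
        visit(seg)
    return order

# Example: two first-order headwaters joining into a second-order trunk.
print(strahler_orders({"trunk": ["a", "b"], "a": [], "b": []}))
```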
Full waveform inversion using a decomposed single frequency component from a spectrogram
NASA Astrophysics Data System (ADS)
Ha, Jiho; Kim, Seongpil; Koo, Namhyung; Kim, Young-Ju; Woo, Nam-Sub; Han, Sang-Mok; Chung, Wookeen; Shin, Sungryul; Shin, Changsoo; Lee, Jaejoon
2018-06-01
Although many full waveform inversion methods have been developed to construct velocity models of the subsurface, various approaches have been presented to obtain an inversion result with long-wavelength features even when the seismic data lack low-frequency components. In this study, a new full waveform inversion algorithm is proposed to recover a long-wavelength velocity model that reflects the inherent characteristics of each frequency component of the seismic data, using a single frequency component decomposed from the spectrogram. We utilized the wavelet transform method to obtain the spectrogram, and the signal decomposed from the spectrogram was used as the transformed data. The Gauss-Newton method with the diagonal elements of an approximate Hessian matrix was used to update the model parameters at each iteration. Based on the results of time-frequency analysis of the spectrogram, numerical tests with several decomposed frequency components were performed using a modified SEG/EAGE salt dome (A-A′) line to demonstrate the feasibility of the proposed inversion algorithm. This demonstrated that a reasonable inverted velocity model with long-wavelength structures can be obtained using a single frequency component. It was also confirmed that, when strong noise occurs in part of the frequency band, it is feasible to obtain a long-wavelength velocity model from the noisy data with a frequency component that is less affected by the noise. Finally, it was confirmed that the results obtained from the spectrogram inversion can be used as an initial velocity model in conventional inversion methods.
Overcoming an obstacle in expanding a UMLS semantic type extent.
Chen, Yan; Gu, Huanying; Perl, Yehoshua; Geller, James
2012-02-01
This paper strives to overcome a major problem encountered by a previous expansion methodology for discovering concepts highly likely to be missing a specific semantic type assignment in the UMLS. This methodology is the basis for an algorithm that presents the discovered concepts to a human auditor for review and possible correction. We analyzed the problem of the previous expansion methodology and discovered that it was due to an obstacle constituted by one or more concepts assigned the UMLS Semantic Network semantic type Classification. A new methodology was designed that bypasses such an obstacle without a combinatorial explosion in the number of concepts presented to the human auditor for review. The new expansion methodology with obstacle avoidance was tested with the semantic type Experimental Model of Disease and found over 500 concepts missed by the previous methodology that are in need of this semantic type assignment. Furthermore, other semantic types suffering from the same major problem were discovered, indicating that the methodology is of more general applicability. The algorithmic discovery of concepts that are likely missing a semantic type assignment is possible even in the face of obstacles, without an explosion in the number of processed concepts.
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
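Of the five error categories, the circular-assignment check is the most mechanical: a standard depth-first search over one transitive relationship suffices, as in this sketch. The toy dict is a hypothetical stand-in for the FMA's actual part_of graph.

```python
def find_cycles(part_of):
    """Report circular chains of a transitive relationship such as part_of,
    given as a dict: concept -> list of related concepts."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in part_of}
    cycles = []

    def dfs(node, path):
        color[node] = GRAY                       # node is on the current path
        path.append(node)
        for nxt in part_of.get(node, []):
            if color.get(nxt, WHITE) == GRAY:    # back edge -> cycle found
                cycles.append(path[path.index(nxt):] + [nxt])
            elif color.get(nxt, WHITE) == WHITE:
                dfs(nxt, path)
        path.pop()
        color[node] = BLACK                      # fully explored

    for c in list(part_of):
        if color[c] == WHITE:
            dfs(c, [])
    return cycles

print(find_cycles({"a": ["b"], "b": ["c"], "c": ["a"]}))  # [['a','b','c','a']]
```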
Teutsch, T; Mesch, M; Giessen, H; Tarin, C
2015-01-01
In this contribution, a method to select discrete wavelengths that allow an accurate estimation of the glucose concentration in a metamaterial-based biosensing system is presented. The sensing concept is adapted to the particular application of ophthalmic glucose sensing by covering the metamaterial with a glucose-sensitive hydrogel, and the sensor readout is performed optically. Because a spectrometer is not suitable in a mobile context, a few discrete wavelengths must be selected to estimate the glucose concentration. The developed selection methods are based on nonlinear support vector regression (SVR) models. Two selection methods are compared, and it is shown that wavelengths selected by a sequential forward feature selection algorithm achieve an improved estimation. The presented method can easily be applied to different metamaterial layouts and hydrogel configurations.
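Sequential forward selection wrapped around an SVR model is available off the shelf; the sketch below uses scikit-learn's SequentialFeatureSelector (version 0.24 or later) on synthetic stand-in spectra, since the paper's data and exact SVR settings are not given here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import SequentialFeatureSelector

# Stand-in data: rows are sensor spectra, columns are candidate wavelengths,
# y is the reference glucose concentration (three columns truly informative).
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 40))                  # 60 samples x 40 wavelengths
y = X[:, [3, 17, 28]].sum(axis=1) + 0.1 * rng.normal(size=60)

sfs = SequentialFeatureSelector(SVR(kernel="rbf"), n_features_to_select=3,
                                direction="forward", cv=5)
sfs.fit(X, y)
print(np.flatnonzero(sfs.get_support()))       # indices of selected wavelengths
```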
Software defined multi-OLT passive optical network for flexible traffic allocation
NASA Astrophysics Data System (ADS)
Zhang, Shizong; Gu, Rentao; Ji, Yuefeng; Zhang, Jiawei; Li, Hui
2016-10-01
With the rapid growth of 4G mobile networks and vehicular network services, mobile terminal users have an increasing demand for data sharing among different radio remote units (RRUs) and roadside units (RSUs). Meanwhile, commercial video-streaming and video/voice conference applications delivered through peer-to-peer (P2P) technology continue to stimulate a sharp increase in bandwidth demand from both business and residential subscribers. However, a significant issue is that, although wavelength division multiplexing (WDM) and orthogonal frequency division multiplexing (OFDM) technologies have been proposed to fulfil the ever-increasing bandwidth demand in the access network, the bandwidth of optical fiber is not unlimited, owing to the restrictions of optical component properties and modulation/demodulation technology, and blindly adding wavelengths cannot meet the cost-sensitive requirements of the access network. In this paper, we propose a software-defined multi-OLT PON architecture to support efficient scheduling of access network traffic. By introducing software-defined networking technology and a wavelength selective switch into the TWDM PON system in the central office, multiple OLTs can be treated as a bandwidth resource pool supporting flexible traffic allocation for optical network units (ONUs). Moreover, under the configuration of the control plane, ONUs can change their affiliation between different OLTs under different traffic situations; thus inter-OLT traffic can be localized and the data exchange pressure on the core network can be relieved. Because this architecture is designed to follow the TWDM PON specification as closely as possible, existing optical distribution network (ODN) investment can be preserved and conventional EPON/GPON equipment remains compatible with the proposed architecture. Furthermore, based on this architecture, we propose a dynamic wavelength scheduling algorithm, which can be deployed as an application on the control plane and achieves effective scheduling of wavelength resources between different OLTs under various traffic situations. Simulation results show that, by using the scheduling algorithm, network traffic between different OLTs can be optimized effectively, and the wavelength utilization of the multi-OLT system can be improved thanks to flexible wavelength scheduling.
ERIC Educational Resources Information Center
Green, Gareth P.; Bean, John C.; Peterson, Dean J.
2013-01-01
Intermediate microeconomics is typically viewed as a theory and tools course that relies on algorithmic problems to help students learn and apply economic theory. However, the authors' assessment research suggests that algorithmic problems by themselves do not encourage students to think about where the theory comes from, why the theory is…
NASA Astrophysics Data System (ADS)
Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan
2011-11-01
The simultaneous mission assignment and home allocation problem for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search for exploration and exploitation, respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solution selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate its single-objective optimization, optimization effectiveness (indexed by the S-metric and C-metric) and optimization efficiency (indexed by computational burden and CPU time) in detail. The BBA outperformed its competitors in almost all the quantitative indices. Hence, the above overall scheme, and particularly the search-history-adapted global search strategy, was validated.
Esteki, M; Nouroozi, S; Shahsavari, Z
2016-02-01
To develop a simple and efficient spectrophotometric technique combined with chemometrics for the simultaneous determination of methyl paraben (MP) and hydroquinone (HQ) in cosmetic products, and specifically, to: (i) evaluate the potential of applying the successive projections algorithm (SPA) to derivative spectrophotometric data in order to provide sufficient accuracy and model robustness and (ii) determine MP and HQ concentrations in cosmetics without tedious pre-treatments such as derivatization or extraction, which are time-consuming and require hazardous solvents. The absorption spectra were measured in the wavelength range of 200-350 nm. Prior to building the chemometric models, the original and first-derivative absorption spectra of binary mixtures were used as calibration matrices. Variables selected by the successive projections algorithm were used to obtain multiple linear regression (MLR) models based on a small subset of wavelengths. The number of wavelengths and the starting vector were optimized, and comparison of the root mean square errors of calibration (RMSEC) and cross-validation (RMSECV) was applied to select effective wavelengths with the least collinearity and redundancy. Principal component regression (PCR) and partial least squares (PLS) models were also developed for comparison. The concentrations of the calibration matrix ranged from 0.1 to 20 μg mL⁻¹ for MP, and from 0.1 to 25 μg mL⁻¹ for HQ. The constructed models were tested on an external validation data set and finally on cosmetic samples. The results indicated that successive projections algorithm-multiple linear regression (SPA-MLR), applied to the first-derivative spectra, achieved the optimal performance for the two compounds when compared with full-spectrum PCR and PLS. The root mean square error of prediction (RMSEP) was 0.083 and 0.314 for MP and HQ, respectively. To verify the accuracy of the proposed method, a recovery study on real cosmetic samples was carried out with satisfactory results (84-112%). The proposed method, an environmentally friendly approach using a minimum amount of solvent, is a simple, fast and low-cost analysis method that provides accurate and robust models, and it does not require any complex extraction procedure that is time-consuming or needs hazardous solvents.
Single machine scheduling with slack due dates assignment
NASA Astrophysics Data System (ADS)
Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin
2017-04-01
This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e. all jobs have slack due dates). The goal is to find a sequence of jobs, together with a due date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness value and the common flow allowance cost, where the weight is position-dependent. To solve this problem, an ? time algorithm is proposed. Some extensions of the problem are also discussed.
Minimal-delay traffic grooming for WDM star networks
NASA Astrophysics Data System (ADS)
Choi, Hongsik; Garg, Nikhil; Choi, Hyeong-Ah
2003-10-01
All-optical networks face the challenge of reducing slow opto-electronic conversions by managing the assignment of traffic streams to wavelengths in an intelligent manner, while at the same time utilizing bandwidth resources to the maximum. This challenge becomes harder in networks closer to the end users, which have insufficient data to saturate single wavelengths as well as traffic streams outnumbering the usable wavelengths, resulting in traffic grooming that requires costly traffic analysis at access nodes. We study the problem of traffic grooming that reduces the need to analyze traffic for the class of network architecture most used by Metropolitan Area Networks: the star network. The problem being NP-complete, we provide an efficient greedy heuristic, with a worst-case bound of twice the optimum, that can be used to intelligently groom traffic at the LANs to reduce latency at the access nodes. Simulation results show that our greedy heuristic achieves a near-optimal solution.
NASA Astrophysics Data System (ADS)
McCarthy, Annemarie; Ruth, Albert A.
2013-11-01
Two distinct S0 → S1 fluorescence excitation spectra of methyl-2-hydroxy-3-naphthoate (MHN23) have been obtained by monitoring fluorescence separately in the short (~410 nm) and long (~650 nm) wavelength emission bands. The short-wavelength fluorescence is assigned to two MHN23 conformers which do not undergo excited state intramolecular proton transfer (ESIPT). Analysis of the 'long wavelength' fluorescence excitation spectrum, which arises from the proton transfer tautomer of MHN23, indicates an average lifetime of τ ⩾ 18 ± 2 fs for the initially excited states. Invoking the results of Catalan et al. [J. Phys. Chem. A, 1999, 103, 10921], who determined the N tautomer to decay predominantly via a fast non-radiative process, the limit on the rate of intramolecular excited-state proton transfer in MHN23 is calculated as kpt ⩽ 1 × 10¹² s⁻¹.
Comparative analysis of multisensor satellite monitoring of Arctic sea-ice
Belchansky, G.I.; Mordvintsev, Ilia N.; Douglas, David C.
1999-01-01
This report presents a comparative analysis of nearly coincident Russian OKEAN-01 polar-orbiting satellite data, Special Sensor Microwave Imager (SSM/I) and Advanced Very High Resolution Radiometer (AVHRR) imagery. The OKEAN-01 ice concentration algorithms utilize active and passive microwave measurements and a linear mixture model for the measured values of brightness temperature and radar backscatter. SSM/I and AVHRR ice concentrations were computed with the NASA Team algorithm and with visible and thermal-infrared wavelength AVHRR data, respectively.
Computation of diffuse sky irradiance from multidirectional radiance measurements
NASA Technical Reports Server (NTRS)
Ahmad, Suraiya P.; Middleton, Elizabeth M.; Deering, Donald W.
1987-01-01
Accurate determination of the diffuse solar spectral irradiance directly above the land surface is important in characterizing the reflectance properties of these surfaces, especially vegetation canopies. This determination is also needed to infer the net radiation budget of the earth-atmosphere system above these surfaces. An algorithm is developed here for the computation of hemispheric diffuse irradiance using the measurements from an instrument called PARABOLA, which rapidly measures upwelling and downwelling radiances in three selected wavelength bands. The validity of the algorithm is established from simulations. The standard reference data set of diffuse radiances of Dave (1978), obtained by solving the radiative transfer equation numerically for realistic atmospheric models, is used to simulate PARABOLA radiances. Hemispheric diffuse irradiance is estimated from a subset of simulated radiances by using the algorithm described. The algorithm is validated by comparing the estimated diffuse irradiance with the true diffuse irradiance of the standard data set. The validations include sensitivity studies for two wavelength bands (visible, 0.65-0.67 micron; near infrared, 0.81-0.84 micron), different atmospheric conditions, solar elevations, and surface reflectances. In most cases the hemispheric diffuse irradiance computed from simulated PARABOLA radiances and the true irradiance obtained from radiative transfer calculations agree within 1-2 percent. This technique can be applied to other sampling instruments designed to estimate hemispheric diffuse sky irradiance.
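The central computation, integrating sampled sky radiances against cosθ over the hemisphere, can be sketched as below for one wavelength band. A regular angular grid and a simple midpoint quadrature are assumed, which is a simplification of PARABOLA's actual sampling geometry.

```python
import numpy as np

def diffuse_irradiance(radiance, zenith_deg, azimuth_deg):
    """Approximate E = ∫∫ L(θ, φ) cosθ sinθ dθ dφ over the sky hemisphere
    from radiances on a regular (θ, φ) grid; radiance has shape
    (n_theta, n_phi)."""
    theta = np.radians(zenith_deg)
    phi = np.radians(azimuth_deg)
    dtheta = theta[1] - theta[0]          # regular spacing assumed
    dphi = phi[1] - phi[0]
    integrand = radiance * np.cos(theta)[:, None] * np.sin(theta)[:, None]
    return integrand.sum() * dtheta * dphi

# Sanity check: an isotropic sky of unit radiance gives E ≈ pi.
zen = np.arange(2.5, 90.0, 5.0)           # zenith-angle bin centres
azi = np.arange(0.0, 360.0, 5.0)
L = np.ones((zen.size, azi.size))
print(diffuse_irradiance(L, zen, azi), np.pi)
```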
A Dual-Wavelength Radar Technique to Detect Hydrometeor Phases
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert
2016-01-01
This study investigates the feasibility of a Ku- and Ka-band space/air-borne dual-wavelength radar algorithm to discriminate various phase states of precipitating hydrometeors. A phase-state classification algorithm has been developed from radar measurements of snow, mixed-phase and rain obtained from stratiform storms. The algorithm, presented in the form of a look-up table that links the Ku-band radar reflectivities and the dual-frequency ratio (DFR) to the phase states of hydrometeors, is checked by applying it to measurements from the Jet Propulsion Laboratory, California Institute of Technology, Airborne Precipitation Radar Second Generation (APR-2). In creating the statistically based phase look-up table, the attenuation-corrected (or true) radar reflectivity factors are employed, leading to better accuracy in determining the hydrometeor phase. In practice, however, the true radar reflectivities are not always available before the phase states of the hydrometeors are determined. Therefore, it is desirable to make use of the measured radar reflectivities in classifying the phase states. To do this, a phase-identification procedure is proposed that uses only measured radar reflectivities. The procedure is then tested using APR-2 airborne radar data. Analysis of the classification results in stratiform rain indicates that the regions of snow, mixed-phase and rain derived from the phase-identification algorithm coincide reasonably well with those determined from the measured radar reflectivities and linear depolarization ratio (LDR).
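A minimal sketch of what a reflectivity/DFR look-up classifier looks like in code; the decision boundaries below are placeholders for illustration only, not the paper's table derived from stratiform-storm measurements.

```python
# Hedged sketch of a look-up-table style phase classifier driven by Ku-band
# reflectivity (dBZ) and the dual-frequency ratio DFR = Z(Ku) - Z(Ka) in dB.
# The linear threshold curves below are placeholders, NOT the paper's table.

def classify_phase(z_ku_dbz, dfr_db):
    """Return 'snow', 'mixed' or 'rain' for one (Z_Ku, DFR) measurement pair."""
    # Illustrative boundaries: at a given Z_Ku, snow tends to show a large
    # DFR and rain a small one; mixed phase falls in between.
    dfr_snow = 0.20 * z_ku_dbz + 2.0   # hypothetical snow boundary
    dfr_rain = 0.05 * z_ku_dbz + 0.5   # hypothetical rain boundary
    if dfr_db >= dfr_snow:
        return "snow"
    if dfr_db <= dfr_rain:
        return "rain"
    return "mixed"

profile = [(20.0, 7.0), (28.0, 4.5), (33.0, 1.2)]   # top-down (Z_Ku, DFR) pairs
print([classify_phase(z, d) for z, d in profile])   # snow -> mixed -> rain
```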
Zhu, Hongyan; Chu, Bingquan; Fan, Yangyang; Tao, Xiaoya; Yin, Wenxin; He, Yong
2017-08-10
We investigated the feasibility and potential of determining firmness, soluble solids content (SSC), and pH in kiwifruits using hyperspectral imaging combined with variable selection methods and calibration models. The images were acquired by a push-broom hyperspectral reflectance imaging system covering two spectral ranges. Weighted regression coefficients (BW), the successive projections algorithm (SPA) and genetic algorithm-partial least squares (GAPLS) were compared and evaluated for the selection of effective wavelengths. Moreover, multiple linear regression (MLR), partial least squares regression and least squares support vector machine (LS-SVM) models were developed to predict the quality attributes quantitatively using the effective wavelengths. The established models, particularly SPA-MLR, SPA-LS-SVM and GAPLS-LS-SVM, performed well. The SPA-MLR models for firmness (R_pre = 0.9812, RPD = 5.17) and SSC (R_pre = 0.9523, RPD = 3.26) at 380-1023 nm showed excellent performance, whereas GAPLS-LS-SVM was the optimal model at 874-1734 nm for predicting pH (R_pre = 0.9070, RPD = 2.60). Image processing algorithms were developed to apply the predictive model to every pixel and generate prediction maps that visualize the spatial distribution of firmness and SSC. Hence, the results clearly demonstrate that hyperspectral imaging has potential as a fast and non-invasive method to predict the quality attributes of kiwifruits.
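As a hedged illustration of the final calibration step only, the snippet below fits a multiple linear regression on a handful of selected wavelengths and reports R_pre- and RPD-style metrics; the wavelengths and data are synthetic stand-ins, not the kiwifruit measurements.

```python
import numpy as np

# Hedged sketch of the MLR modeling step after variable selection; the
# "selected" wavelengths and all data below are synthetic assumptions.

rng = np.random.default_rng(0)
n_samples, selected = 60, [412, 550, 680, 975]      # hypothetical wavelengths
X = rng.uniform(0.1, 0.9, size=(n_samples, len(selected)))   # reflectance
true_w = np.array([4.0, -2.0, 1.5, 3.0])
y = X @ true_w + 8.0 + rng.normal(0, 0.1, n_samples)         # e.g. SSC (%)

A = np.hstack([X, np.ones((n_samples, 1))])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # least-squares MLR fit
pred = A @ coef
r = np.corrcoef(pred, y)[0, 1]
rpd = y.std() / (y - pred).std()
print(f"R ~ {r:.4f}, RPD ~ {rpd:.2f}")               # metrics used in the paper
```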
Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun
2017-08-20
This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with a radius of λ, where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node using beamforming technology. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to nearly optimize the power ratio of the UCA beamforming peak side lobe (PSL) and the main lobe (ML) aimed at the given target direction. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority.
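For a concrete sense of the optimization target, here is a minimal (1+1) evolution-strategy sketch that tunes the phase weights of a 12-element UCA of radius λ to reduce the PSL-to-ML power ratio toward a target azimuth. This is not the EBO/EBO-R algorithm; the main-lobe exclusion width and mutation settings are assumptions.

```python
import numpy as np

# Hedged sketch: plain (1+1)-ES on phase-only UCA weights, k*a = 2*pi for a
# radius of one wavelength. Illustrative settings, not the paper's EBO/EBO-R.

N, KA = 12, 2.0 * np.pi                    # 12 elements; k*a = 2*pi
PHI_N = 2.0 * np.pi * np.arange(N) / N     # element angular positions
GRID = np.radians(np.arange(0.0, 360.0, 0.5))

def pattern(phases):
    """|Array factor| over azimuth for unit-amplitude, phase-only weights."""
    prop = np.exp(1j * KA * np.cos(GRID[:, None] - PHI_N))  # (n_grid, N)
    return np.abs(prop @ np.exp(1j * phases))

def psl_over_ml(phases, target, halfwidth=np.radians(20.0)):
    """Power ratio of the peak side lobe to the main lobe toward `target`."""
    p = pattern(phases)
    ml = np.abs(np.exp(1j * KA * np.cos(target - PHI_N)) @ np.exp(1j * phases))
    side = p[np.abs((GRID - target + np.pi) % (2 * np.pi) - np.pi) > halfwidth]
    return (side.max() / ml) ** 2

rng = np.random.default_rng(0)
target = np.radians(60.0)
best = -KA * np.cos(target - PHI_N)        # start from conventional steering
best_f, sigma = psl_over_ml(best, target), 0.3
for _ in range(2000):                      # fixed-strength (1+1)-ES loop
    cand = best + sigma * rng.standard_normal(N)
    f = psl_over_ml(cand, target)
    if f < best_f:
        best, best_f = cand, f
print(f"PSL/ML power ratio: {best_f:.4f}")
```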
Noninvasive hemoglobin measurement using dynamic spectrum
NASA Astrophysics Data System (ADS)
Yi, Xiaoqing; Li, Gang; Lin, Ling
2017-08-01
Spectroscopic methods for noninvasive hemoglobin (Hgb) measurement are hindered by individual differences and particularly weak signals. To address these problems, we have put forward a series of improvements based on the dynamic spectrum (DS), covering instrument design, the spectrum extraction algorithm, and the modeling approach. The instrument adopts light sources composed of eight laser diodes with wavelengths ranging from 600 nm to 1100 nm and records photoplethysmography signals at the eight wavelengths synchronously. To simplify the optical design, we modulate the light sources with orthogonal square waves and design the corresponding demodulation algorithm, instead of adopting a beam-splitting system. A newly designed algorithm named difference accumulation has proved effective in improving the accuracy of dynamic spectrum extraction. 220 subjects were involved in the clinical experiment. An extreme learning machine calibration model between the DS data and the Hgb levels is established. The correlation coefficient and root-mean-square error of the prediction sets are 0.8645 and 8.48 g/l, respectively. The results indicate that the Hgb level can be derived by this approach noninvasively with acceptable precision and accuracy. It is expected to reach clinical application in the future.
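The orthogonal square-wave multiplexing idea can be sketched as follows, using Walsh/Hadamard codes as the square waves: a single detector stream is demodulated per wavelength by correlating with each code. The code choice, chip length and noise level are assumptions, not the instrument's actual drive scheme.

```python
import numpy as np

# Hedged sketch of orthogonal square-wave multiplexing/demultiplexing for
# eight light sources sharing one detector. Walsh-like +/-1 codes stand in
# for the paper's drive waveforms; all rates and levels are assumptions.

def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:            # Sylvester construction: 1 -> 2 -> 4 -> 8
        H = np.block([[H, H], [H, -H]])
    return H

CODES = hadamard(8)                  # eight mutually orthogonal +/-1 patterns
CHIP = 50                            # detector samples per code chip

def modulate(intensities):
    chips = np.repeat(CODES, CHIP, axis=1)   # each row: one diode's drive
    return intensities @ chips               # summed signal at the detector

def demodulate(signal):
    chips = np.repeat(CODES, CHIP, axis=1)
    return (chips @ signal) / (CODES.shape[1] * CHIP)  # orthogonality

true = np.array([1.0, 0.8, 0.6, 0.9, 0.4, 0.7, 0.5, 0.3])
rx = modulate(true) + np.random.default_rng(0).normal(0, 0.01, 8 * CHIP)
print(np.round(demodulate(rx), 3))   # recovers the eight per-wavelength levels
```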
Research on the control of large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1983-01-01
The research effort on the control of large space structures at the University of Houston has concentrated on the mathematical theory of finite-element models; identification of the mass, damping, and stiffness matrices; assignment of damping to structures; and decoupling of structure dynamics. The objective of the work has been, and will continue to be, the development of efficient numerical algorithms for the analysis, control, and identification of large space structures. The major considerations in the development of the algorithms have been the large number of equations that must be handled by each algorithm and the sensitivity of the algorithms to numerical errors.
Analysis and an image recovery algorithm for ultrasonic tomography system
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1994-01-01
The problem of ultrasonic reflectivity tomography is similar to that of a spotlight-mode aircraft Synthetic Aperture Radar (SAR) system. The analysis of a circular-path spotlight-mode SAR in this paper leads to insight into the system characteristics. It indicates that such a system, when operated over a wide bandwidth, is capable of achieving the ultimate resolution: one quarter of the wavelength of the carrier frequency. An efficient processing algorithm based on the exact two-dimensional spectrum is presented. Simulation results indicate that the impulse responses meet the predicted resolution performance. Compared to an algorithm previously developed for ultrasonic reflectivity tomography, the throughput rate of this algorithm is about ten times higher.
Multi-wavelength approach towards on-product overlay accuracy and robustness
NASA Astrophysics Data System (ADS)
Bhattacharyya, Kaustuve; Noot, Marc; Chang, Hammer; Liao, Sax; Chang, Ken; Gosali, Benny; Su, Eason; Wang, Cathy; den Boef, Arie; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Cheng, Kevin; Lin, John
2018-03-01
The success of the diffraction-based overlay (DBO) technique [1,4,5] in the industry rests not just on its good precision and low tool-induced shift, but also on the measurement accuracy [2] and robustness that DBO can provide. Significant efforts have been made to capitalize on the potential that DBO has to address measurement accuracy and robustness. The introduction of many measurement wavelength choices (continuous wavelength) in DBO is one of the key new capabilities in this area. Along with the continuous choice of wavelengths, the algorithms (fueled by swing-curve physics) for how to use these wavelengths are of high importance for a robust recipe setup that can avoid the impact of process stack variations (symmetric as well as asymmetric). All of these are discussed. Moreover, another aspect of boosting measurement accuracy and robustness is discussed that deploys the capability to combine overlay measurement data from multiple wavelength measurements. The goal is to provide a method to make overlay measurements immune to process stack variations and also to report health KPIs for every measurement. By combining measurements from multiple wavelengths, a final overlay measurement is generated. The results show a significant benefit in accuracy and robustness against process stack variation. These results are supported by both measurement data and simulations from many product stacks.
Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique
NASA Astrophysics Data System (ADS)
Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua
2018-05-01
A simple laser wavelength calibration technique based on the second harmonic signal is demonstrated in this paper to improve the performance of a quartz-enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. its signal-to-noise ratio (SNR), detection limit and long-term stability. A constant current corresponding to the gas absorption line, combined with an f/2-frequency sinusoidal signal, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift caused by ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor is chosen as the target gas to evaluate the performance against the constant driving mode and a conventional WMS system. The water vapor sensor was made insensitive to incoherent external acoustic noise by the numerical averaging technique. As a result, the SNR increases by a factor of 12.87 in the wavelength-calibration-based system compared to the conventional WMS system. The new system achieved a better linear response (R² = 0.9995) over the concentration range from 300 to 2000 ppmv, and achieved a minimum detection limit (MDL) of 630 ppbv.
NASA Astrophysics Data System (ADS)
Wang, Cheng; Wang, Hongxiang; Ji, Yuefeng
2018-01-01
In this paper, a multi-bit wavelength coding phase-shift-keying (PSK) optical steganography method is proposed based on amplified spontaneous emission noise and a wavelength selective switch. In this scheme, the assignment codes and the delay length differences provide a large two-dimensional key space. A 2-bit wavelength coding PSK system is simulated to show the efficiency of the proposed method. The simulation results demonstrate that the stealth signal, after being encoded and modulated, is well hidden in both the time and spectral domains under the public channel and the noise existing in the system. Besides, even if the principle of this scheme and the existence of the stealth channel are known to an eavesdropper, the probability of recovering the stealth data is less than 0.02 if the key is unknown. Thus it can protect the security of the stealth channel more effectively. Furthermore, the stealth channel results in a 0.48 dB power penalty to the public channel at a 1 × 10⁻⁹ bit error rate, and the public channel has no influence on the reception of the stealth channel.
Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling
NASA Astrophysics Data System (ADS)
Reed, Seann M.
2003-09-01
The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT), has been developed to automatically, accurately, and efficiently assign flow directions to coarse-resolution grid cells using information from a higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency to produce diagonal flow directions. Analyses of results at output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that produces minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
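COTAT itself traces cell outlets along the fine DEM's flow network, which is beyond a short sketch; shown below is only the underlying D8 building block that assigns each cell the direction of steepest descent to one of its eight neighbors, with a hypothetical toy DEM.

```python
import numpy as np

# Hedged sketch: the basic D8 steepest-descent assignment that flow-direction
# algorithms such as COTAT build on; this is NOT COTAT's outlet tracing.

D8 = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def d8_flow_directions(dem):
    """Return an index 0-7 into D8 per cell (or -1 for pits and edges)."""
    rows, cols = dem.shape
    out = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            drops = []
            for k, (dr, dc) in enumerate(D8):
                dist = np.hypot(dr, dc)            # 1 or sqrt(2) cell lengths
                drops.append(((dem[r, c] - dem[r + dr, c + dc]) / dist, k))
            best_drop, best_k = max(drops)
            if best_drop > 0:                      # strictly downhill only
                out[r, c] = best_k
    return out

dem = np.array([[9, 8, 7], [8, 6, 5], [7, 5, 3]], dtype=float)
print(d8_flow_directions(dem))   # interior cell drains toward the SE corner
```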
NASA Astrophysics Data System (ADS)
Zhang, Jun-You; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming
2018-04-01
An accurate and stable identification technique is developed to retrieve the optical constants and particle size distributions (PSDs) of a particle system simultaneously from multi-wavelength scattering-transmittance signals by using an improved quantum particle swarm optimization algorithm. Mie theory is used to calculate the directional laser intensity scattered by the particles and the spectral collimated transmittance. Sensitivity and objective-function distribution analyses were conducted to evaluate the mathematical properties (i.e. ill-posedness and multimodality) of the inverse problems under three different combinations of optical signals (i.e. the single-wavelength multi-angle light scattering signal; the single-wavelength multi-angle light scattering and spectral transmittance signal; and the multi-wavelength multi-angle light scattering and spectral transmittance signal). It was found that the best global convergence performance is obtained by using the multi-wavelength scattering-transmittance signals. Meanwhile, the present technique has been tested under different levels of Gaussian measurement noise to prove its feasibility in a large solution space. All the results show that the inverse technique using multi-wavelength scattering-transmittance signals is effective and suitable for retrieving the optical complex refractive indices and PSD of a particle system simultaneously.
ERIC Educational Resources Information Center
Carrell, Scott E.; Sacerdote, Bruce I.; West, James E.
2011-01-01
We take cohorts of entering freshmen at the United States Air Force Academy and assign half to peer groups with the goal of maximizing the academic performance of the lowest ability students. Our assignment algorithm uses peer effects estimates from the observational data. We find a negative and significant treatment effect for the students we…
Discovery of New Coronal Lines at 2.843 and 2.853 μm
NASA Astrophysics Data System (ADS)
Samra, Jenna E.; Judge, Philip G.; DeLuca, Edward E.; Hannigan, James W.
2018-04-01
Two new emission features were observed during the 2017 August 21 total solar eclipse by a novel spectrometer, the Airborne Infrared Spectrometer (AIR-Spec), flown at 14.3 km altitude aboard the NCAR Gulfstream-V aircraft. We derive wavelengths in air of 2.8427 ± 0.00009 μm and 2.8529 ± 0.00008 μm. One of these lines belongs to the 3p⁵3d ³F₃° → 3p⁵3d ³F₄° transition in Ar-like Fe IX. This appears to be the first detection of this transition from any source. Minimization of residual wavelength differences using both measured wavelengths, together with National Institute of Standards and Technology (NIST) extreme ultraviolet wavelengths, does not clearly favor assignment to Fe IX. However, the shorter wavelength line appears more consistent with other observed features formed at similar temperatures to Fe IX. The transition occurs between two levels within the excited 3p⁵3d configuration, 429,000 cm⁻¹ above the ground level. The line is therefore absent in photo-ionized coronal-line astrophysical sources such as the Circinus Galaxy. Data from a Fourier transform interferometer (FTIR) deployed from Wyoming show that both lines are significantly attenuated by telluric H2O, even at dry sites. We have been unable to identify the longer wavelength transition.
Multi-Agent Task Negotiation Among UAVs to Defend Against Swarm Attacks
2012-03-01
iNJclust: Iterative Neighbor-Joining Tree Clustering Framework for Inferring Population Structure.
Limpiti, Tulaya; Amornbunchornvej, Chainarong; Intarapanich, Apichart; Assawamakin, Anunchai; Tongsima, Sissades
2014-01-01
Understanding genetic differences among populations is one of the most important issues in population genetics. Genetic variations, e.g., single nucleotide polymorphisms, are used to characterize the commonality and difference of individuals from various populations. This paper presents an efficient graph-based clustering framework, called the iNJclust algorithm, which operates iteratively on the Neighbor-Joining (NJ) tree. The framework uses well-known genetic measurements, namely the allele-sharing distance, the neighbor-joining tree, and the fixation index. The behavior of the fixation index is utilized in the algorithm's stopping criterion. The algorithm provides an estimated number of populations, individual assignments, and relationships between populations as outputs. The clustering result is reported in the form of a binary tree, whose terminal nodes represent the final inferred populations and whose structure preserves the genetic relationships among them. The clustering performance and the robustness of the proposed algorithm are tested extensively using simulated and real data sets from bovine, sheep, and human populations. The results indicate that the number of populations within each data set is reasonably estimated, the individual assignment is robust, and the structure of the inferred population tree corresponds to the intrinsic relationships among populations within the data.
A Genetic Algorithm for the Bi-Level Topological Design of Local Area Networks
Camacho-Vallejo, José-Fernando; Mar-Ortiz, Julio; López-Ramos, Francisco; Rodríguez, Ricardo Pedraza
2015-01-01
Local access networks (LANs) are commonly used as communication infrastructures which meet the demand of a set of users in a local environment. Usually these networks consist of several LAN segments connected by bridges. The bi-level topological LAN design problem consists of assigning users to clusters and connecting the clusters by bridges in order to obtain a minimum-response-time network with minimum connection cost. The decision of optimally assigning users to clusters is made by the leader, and the follower then makes the decision of connecting all the clusters while forming a spanning tree. In this paper, we propose a genetic algorithm for solving the bi-level topological design of a local access network. Our solution method considers the Stackelberg equilibrium to solve the bi-level problem. The Stackelberg-Genetic algorithm procedure deals with the fact that the follower's problem cannot be optimally solved in a straightforward manner. The computational results obtained from two different sets of instances show that the performance of the developed algorithm is efficient and that it is more suitable for solving the bi-level problem than a previous Nash-Genetic approach. PMID:26102502
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber-based dual-pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. The algorithm is implemented on a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
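A hedged sketch of the genetic-algorithm outer loop only: in the paper each fitness evaluation is a full supercontinuum (generalized nonlinear Schrödinger equation) simulation, so the placeholder cost function below merely stands in for that solver, and the parameter bounds are illustrative assumptions.

```python
import random

# Hedged sketch of a GA over (wavelength, pulse width, peak power); the real
# fitness requires a GNLSE supercontinuum simulation per individual, which
# `spectral_peak_error` only stands in for. All bounds are assumptions.

BOUNDS = [(700e-9, 1100e-9), (50e-15, 500e-15), (1e3, 20e3)]  # lambda, T0, P0

def spectral_peak_error(wavelength, t0, p0):
    """Placeholder cost: a real implementation would propagate the pulse and
    measure the distance of the two soliton peaks from their targets."""
    return abs(wavelength - 950e-9) / 1e-9 + abs(t0 - 2e-13) / 1e-15 + p0 / 1e4

def evolve(pop_size=30, generations=60, mut_rate=0.2):
    rng = random.Random(1)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: spectral_peak_error(*ind))
        elite = pop[: pop_size // 2]                        # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [rng.choice(pair) for pair in zip(a, b)] # uniform crossover
            for i, (lo, hi) in enumerate(BOUNDS):            # bounded mutation
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) / 10)))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ind: spectral_peak_error(*ind))

print(evolve())   # best (wavelength, pulse width, peak power) found
```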
An Efficient Image Recovery Algorithm for Diffraction Tomography Systems
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1993-01-01
A diffraction tomography system has potential application in the ultrasonic medical imaging area. It is capable of achieving imagery with the ultimate resolution of one quarter of the wavelength by collecting ultrasonic backscattering data from a circular array of sensors and reconstructing the object reflectivity using a digital image recovery algorithm performed by a computer. One advantage of such a system is that it allows a relatively low-frequency wave to penetrate more deeply into the object and still achieve imagery with a reasonable resolution. An efficient image recovery algorithm for the diffraction tomography system was originally developed for processing wide-beam spaceborne SAR data...
NASA Astrophysics Data System (ADS)
Shirazi, Abolfazl
2016-10-01
This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out by converting the orbital manoeuvre into a parameter optimization problem, assigning inverse tangent functions to the changes in the direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for the optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm offers high solution quality and fast convergence. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.
TaDb: A time-aware diffusion-based recommender algorithm
NASA Astrophysics Data System (ADS)
Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan
2015-02-01
Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.
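A minimal sketch of mass diffusion on a user-item bipartite graph with temporal link weights in the spirit of TaDb; the exponential weighting form and the time scale below are assumptions for illustration, not the paper's weighting scheme.

```python
import numpy as np

# Hedged sketch: two-step mass diffusion (items -> users -> items) where each
# link is down-weighted by its temporal distance from the target user's
# activity. The exp(-|dt|/tau) form is an assumption, not TaDb's exact rule.

def diffusion_scores(adj, times, target, t_ref, tau=30.0):
    """adj: (users, items) 0/1 matrix; times: link timestamps (same shape)."""
    w = adj * np.exp(-np.abs(times - t_ref) / tau)  # temporally weighted links
    k_item = adj.sum(axis=0).clip(min=1)            # item degrees
    k_user = adj.sum(axis=1).clip(min=1)            # user degrees
    # Step 1: each item collected by the target spreads resource to users.
    res_user = (w * (adj[target] / k_item)).sum(axis=1)
    # Step 2: users redistribute their resource back to the items they hold.
    res_item = (w * (res_user / k_user)[:, None]).sum(axis=0)
    res_item[adj[target] == 1] = -np.inf            # skip already-owned items
    return res_item

adj = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 1, 1]])
times = np.array([[1, 2, 0, 0], [1, 0, 9, 0], [2, 3, 8, 9]], dtype=float)
print(diffusion_scores(adj, times, target=0, t_ref=2.0))
```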
Implications of a wavelength-dependent PSF for weak lensing measurements
NASA Astrophysics Data System (ADS)
Eriksen, Martin; Hoekstra, Henk
2018-07-01
The convolution of galaxy images by the point spread function (PSF) is the dominant source of bias for weak gravitational lensing studies, and an accurate estimate of the PSF is required to obtain unbiased shape measurements. The PSF estimate for a galaxy depends on its spectral energy distribution (SED), because the instrumental PSF is generally a function of the wavelength. In this paper we explore various approaches to determine the resulting `effective' PSF using broad-band data. Considering the Euclid mission as a reference, we find that standard SED template fitting methods result in biases that depend on source redshift, although this may be remedied if the algorithms can be optimized for this purpose. Using a machine learning algorithm we show that, at least in principle, the required accuracy can be achieved with the current survey parameters. It is also possible to account for the correlations between photometric redshift and PSF estimates that arise from the use of the same photometry. We explore the impact of errors in photometric calibration, errors in the assumed wavelength dependence of the PSF model, and limitations of the adopted template libraries. Our results indicate that the required accuracy for Euclid can be achieved using the data that are planned to determine photometric redshifts.
Fiber Bragg grating sensor for fault detection in high voltage overhead transmission lines
NASA Astrophysics Data System (ADS)
Moghadas, Amin
2011-12-01
A fiber-optic-based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by fiber Bragg grating (FBG) sensors. The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signals. Any surge in the magnetic field relates to an increased fault current at a certain location. Fault location can also be precisely determined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG sensors and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems.
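The sensing relation behind the scheme is standard FBG physics: the grating reflects at λ_B = 2·n_eff·Λ, and axial strain ε from the magnetostrictive element shifts it by Δλ_B ≈ λ_B(1 − p_e)ε. The sketch below puts typical silica-fiber numbers on that relation; they are generic values, not the dissertation's parameters.

```python
# Hedged numeric sketch of the FBG strain response; constants are typical
# silica-fiber values (assumptions), not this sensor's calibration.

N_EFF = 1.447          # effective index of the fiber core
PITCH = 535.6e-9       # grating period (m), putting lambda_B near 1550 nm
P_E = 0.22             # effective photo-elastic coefficient of silica

def bragg_shift_pm(strain):
    """Bragg wavelength shift in picometres for a given axial strain."""
    lambda_b = 2.0 * N_EFF * PITCH          # lambda_B = 2 * n_eff * pitch
    return lambda_b * (1.0 - P_E) * strain * 1e12

# A fault current raising the magnetostrictive strain to 100 microstrain:
print(f"{bragg_shift_pm(100e-6):.1f} pm shift")   # ~121 pm, easily resolved
```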
Multiple diagnosis based on photoplethysmography: hematocrit, SpO2, pulse, and respiration
NASA Astrophysics Data System (ADS)
Yoon, Gilwon; Lee, Jong Y.; Jeon, Kye Jin; Park, Kun-Kook; Yeo, Hyung S.; Hwang, Hyun T.; Kim, Hong S.; Hwang, In-Duk
2002-09-01
Photoplethysmography (PPG) measures pulsatile blood flow in real time and non-invasively. One widely known application of PPG is the measurement of oxygen saturation in arterial blood (SpO2). In our work, using more wavelengths than a pulse oximeter, an algorithm and instrument have been developed to measure hematocrit, oxygen saturation, pulse and respiratory rates simultaneously. To predict hematocrit, a dedicated algorithm was developed based on scattering by red blood cells, and a protocol for detecting outlier signals is used to increase accuracy and reliability. Digital filtering techniques are used to extract respiratory rate signals. The use of wavelengths under 1000 nm, a multi-wavelength LED array chip and digital-oriented electronics enables us to make a compact device. Our preliminary clinical trials show that the achieved percent errors are ±8.2% for hematocrit when tested with 594 persons, the R² for SpO2 fitting is 0.99985 when tested with a Bi-Tek pulse oximeter simulator, and the SpO2 error for the in vivo test is ±2.5% over the range of 75-100%. The error of the pulse rates is less than ±5%. We obtained a positive predictive value of 96% for respiratory rates in a qualitative analysis.
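The SpO2 portion of such an algorithm usually reduces to the classic "ratio of ratios" of pulsatile (AC) to baseline (DC) PPG components at red and infrared wavelengths, mapped to saturation by an empirical line. The sketch below uses textbook placeholder coefficients, not this instrument's calibration.

```python
import numpy as np

# Hedged sketch of the ratio-of-ratios SpO2 step; the linear coefficients
# (110, 25) are textbook placeholders, not this instrument's calibration.

def ratio_of_ratios(red, ir):
    """red, ir: 1-D PPG sample arrays spanning a few pulse periods."""
    ac = lambda x: x.max() - x.min()          # crude AC amplitude estimate
    dc = lambda x: x.mean()                   # baseline (DC) level
    return (ac(red) / dc(red)) / (ac(ir) / dc(ir))

def spo2_percent(red, ir, a=110.0, b=25.0):
    return a - b * ratio_of_ratios(red, ir)   # empirical calibration line

t = np.linspace(0, 2, 200)                              # two seconds of data
red = 1.00 + 0.010 * np.sin(2 * np.pi * 1.2 * t)        # synthetic 72 bpm pulse
ir = 1.00 + 0.018 * np.sin(2 * np.pi * 1.2 * t)
print(f"SpO2 ~ {spo2_percent(red, ir):.1f}%")           # ~96% for this example
```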
NASA Astrophysics Data System (ADS)
Lin, Z. D.; Wang, Y. B.; Wang, R. J.; Wang, L. S.; Lu, C. P.; Zhang, Z. Y.; Song, L. T.; Liu, Y.
2017-07-01
A total of 130 topsoil samples collected from Guoyang County, Anhui Province, China, were used to establish a Vis-NIR model for the prediction of organic matter content (OMC) in lime concretion black soils. Different spectral pretreatments were applied to minimize the irrelevant and useless information in the spectra and increase the correlation of the spectra with the measured values. Subsequently, the Kennard-Stone (KS) method and sample set partitioning based on joint x-y distances (SPXY) were used to select the training set. The successive projections algorithm (SPA) and a genetic algorithm (GA) were then applied for wavelength optimization. Finally, a principal component regression (PCR) model was constructed, in which the optimal number of principal components was determined using the leave-one-out cross-validation technique. The results show that the combination of Savitzky-Golay (SG) smoothing and multiplicative scatter correction (MSC) can eliminate the effects of noise and baseline drift; that the SPXY method is preferable to KS for sample selection; and that both SPA and GA can significantly reduce the number of wavelength variables while increasing the accuracy. GA in particular greatly improved the prediction accuracy of soil OMC, with R_cc, RMSEP, and RPD reaching 0.9316, 0.2142, and 2.3195, respectively.
Removal of atmospheric effects from satellite imagery of the oceans.
Gordon, H R
1978-05-15
In attempting to observe the color of the ocean from satellites, it is necessary to remove the effects of atmospheric and sea-surface scattering from the upward radiance at high altitude in order to observe only those photons which were backscattered out of the ocean and hence contain information about subsurface conditions. The observations that (1) the upward radiance from the unwanted photons can be divided into those resulting from Rayleigh scattering alone and those resulting from aerosol scattering alone, (2) the aerosol scattering phase function should be nearly independent of wavelength, and (3) the Rayleigh component can be computed without knowledge of the sea-surface roughness are combined to yield an algorithm for removing a large portion of this unwanted radiance from satellite imagery of the ocean. It is assumed that the ocean is totally absorbing in a band of wavelengths around 750 nm, and it is shown that application of the proposed algorithm to correct the radiance at a wavelength λ requires only the ratio, ε, of the aerosol optical thickness at λ to that at about 750 nm. The accuracy to which the correction can be made, as a function of the accuracy to which ε can be found, is examined in detail. A possible method of finding ε from satellite measurements alone is suggested.
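In code, the correction logic reduces to a few lines. The sketch below is a simplified reading of the scheme (it folds the wavelength dependence of the aerosol radiance entirely into the ε ratio), with arbitrary illustrative radiances.

```python
# Hedged sketch of the black-ocean aerosol correction: near 750 nm the
# measured radiance minus the computable Rayleigh part is taken to be purely
# aerosol, and eps = tau_a(lambda)/tau_a(750) transfers it to the visible
# band. Simplified; all input numbers are illustrative.

def water_leaving_radiance(l_total, l_rayleigh, l_total_750, l_rayleigh_750, eps):
    """Subsurface-signal estimate at one visible wavelength.

    l_total, l_rayleigh:         radiances at the wavelength being corrected
    l_total_750, l_rayleigh_750: the same quantities at the ~750 nm band
    eps: aerosol optical-thickness ratio tau_a(lambda) / tau_a(750)
    """
    l_aerosol_750 = l_total_750 - l_rayleigh_750   # black-ocean assumption
    return l_total - l_rayleigh - eps * l_aerosol_750

# Example numbers (arbitrary units) at a visible band with eps = 1.3:
print(water_leaving_radiance(10.0, 6.5, 3.0, 1.8, 1.3))   # -> 1.94
```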
Fiber Bragg Grating Sensor for Fault Detection in Radial and Network Transmission Lines
Moghadas, Amin A.; Shadaram, Mehdi
2010-01-01
In this paper, a fiber-optic-based sensor capable of fault detection in both radial and network overhead transmission power line systems is investigated. The Bragg wavelength shift is used to measure the fault current and detect faults in power systems. Magnetic fields generated by currents in the overhead transmission lines cause a strain in a magnetostrictive material, which is then detected by a Fiber Bragg Grating (FBG). The fiber Bragg interrogator senses the reflected FBG signals, the Bragg wavelength shift is calculated, and the signals are processed. A broadband light source in the control room scans the shift in the reflected signal. Any surge in the magnetic field relates to an increased fault current at a certain location. Fault location can also be precisely determined with an artificial neural network (ANN) algorithm, which can be easily coordinated with other protective devices. It is shown that faults in the overhead transmission line cause a detectable wavelength shift in the reflected signal of the FBG and can be used to detect and classify different kinds of faults. The proposed method has been extensively tested by simulation, and the results confirm that the proposed scheme is able to detect different kinds of faults in both radial and network systems. PMID:22163416
Detection of large-scale concentric gravity waves from a Chinese airglow imager network
NASA Astrophysics Data System (ADS)
Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao
2018-06-01
Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground because of the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
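A hedged sketch of the filtering step on an already-assembled mosaic: a Gaussian low-pass removes the small-scale waves so only >100 km structure remains. The pixel scale and the cutoff-to-sigma mapping are assumptions, not the paper's filter design (requires scipy).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch: Gaussian low-pass on an assembled airglow mosaic to keep
# only large-scale (>100 km) wave structure. PIX_KM and the cutoff-to-sigma
# conversion are assumptions, not the paper's filter design.

PIX_KM = 5.0                       # assumed km per pixel of the mosaic
CUTOFF_KM = 100.0                  # keep wavelengths longer than this

def largescale(mosaic):
    sigma_px = CUTOFF_KM / (2 * np.pi * PIX_KM)   # rough cutoff-to-sigma map
    return gaussian_filter(mosaic, sigma_px)

# Synthetic mosaic: a 300 km concentric wave plus 40 km small-scale ripple.
x, y = np.linspace(0, 1800, 360), np.linspace(0, 1400, 280)
X, Y = np.meshgrid(x, y)
R = np.hypot(X - 900, Y - 700)
mosaic = np.sin(2 * np.pi * R / 300) + 0.5 * np.sin(2 * np.pi * R / 40)
print(largescale(mosaic).std() < mosaic.std())    # small-scale ripple removed
```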
An adaptive grid algorithm for 3-D GIS landform optimization based on improved ant algorithm
NASA Astrophysics Data System (ADS)
Wu, Chenhan; Meng, Lingkui; Deng, Shijun
2005-07-01
The key technique of 3-D GIS is the realization of fast, high-quality 3-D visualization, in which 3-D roaming systems based on landform play an important role. How to increase the efficiency of the 3-D roaming engine while processing large amounts of landform data is a key problem in 3-D landform roaming systems, and handling it improperly results in tremendous consumption of system resources. Consequently, the key design issues for a 3-D roaming system are the high-speed processing of distributed landform DEM (Digital Elevation Model) data and the high-speed distributed scheduling of the various 3-D landform data resources. In this paper we improve the basic ant algorithm and design a scheduling strategy for 3-D GIS landform resources based on the improved algorithm. By introducing initial hypothetical road weights σ_i, the information-factor update of the original algorithm changes from τ̃_j to Δτ_j + σ_i, where the weights are determined by the 3-D computing capacity of the various nodes in the network environment. During the initial phase of task assignment, increasing the resource information factors of nodes with high task-completion rates and decreasing those of nodes with low rates drives the load-completion rates toward a common value as quickly as possible; in the later stages of task assignment, the load-balancing ability of the system is then further improved. Experimental results show that the improved ant algorithm not only removes many disadvantages of the traditional ant algorithm but also, like ants searching for food, effectively distributes the complex landform computation across many computers for cooperative processing and yields satisfactory search results.
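A minimal reading of the modified update in code: each compute node carries a pheromone level plus a capacity-derived bias σ_i, tasks are routed with probability proportional to pheromone, and nodes that still respond quickly get reinforced so loads come to track capacity. All constants are illustrative, not the paper's settings.

```python
import random

# Hedged sketch of pheromone-based load balancing with a capacity bias
# sigma_i added to the deposit, as the abstract describes. Illustrative only.

CAPACITY = [4.0, 2.0, 1.0]                  # relative 3-D compute power
SIGMA = [c / sum(CAPACITY) for c in CAPACITY]
rng = random.Random(1)

def assign_tasks(n_tasks, rho=0.1):
    tau = [1.0] * len(CAPACITY)             # pheromone per node
    load = [0] * len(CAPACITY)
    for _ in range(n_tasks):
        node = rng.choices(range(len(tau)), weights=tau)[0]
        load[node] += 1
        speed = CAPACITY[node] / (1 + load[node])      # slows as load grows
        tau[node] = (1 - rho) * tau[node] + speed + SIGMA[node]
        for other in range(len(tau)):                  # evaporate the rest
            if other != node:
                tau[other] = (1 - rho) * tau[other]
    return load

print(assign_tasks(100))   # loads should roughly track node capacities
```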
Kunenkov, Erast V; Kononikhin, Alexey S; Perminova, Irina V; Hertkorn, Norbert; Gaspar, Andras; Schmitt-Kopplin, Philippe; Popov, Igor A; Garmash, Andrew V; Nikolaev, Evgeniy N
2009-12-15
The ultrahigh-resolution Fourier transform ion cyclotron resonance (FTICR) mass spectrum of natural organic matter (NOM) contains several thousand peaks, with dozens of molecules matching the same nominal mass. Such complexity poses a significant challenge for automatic data interpretation, in which the most difficult task is molecular formula assignment, especially in the case of heavy and/or multielement ions. In this study, a new universal algorithm for the automatic treatment of FTICR mass spectra of NOM and humic substances, based on total mass difference statistics (TMDS), has been developed and implemented. The algorithm enables a blind search for unknown building blocks (instead of a priori known ones) by revealing repetitive patterns present in spectra. In this respect, it differs from all previously developed approaches. The algorithm was implemented in FIRAN, software for fully automated analysis of mass data with high peak density. A specific feature of FIRAN is its ability to assign formulas to heavy and/or multielement molecules using a "virtual elements" approach. To verify the approach, it was used to process mass spectra of sodium polystyrene sulfonate (PSS, Mw = 2200 Da) and polymethacrylate (PMA, Mw = 3290 Da), which produce heavy multielement and multiply charged ions. Application of TMDS unambiguously identified the monomers present in the polymers, consistent with their structures: C8H7SO3Na for PSS and C4H6O2 for PMA. It also allowed unambiguous formula assignment to all multiply charged peaks, including the heaviest peak in the PMA spectrum at mass 4025.6625 with charge state 6- (mass bias -0.33 ppm). Application of the TMDS algorithm to data on the Suwannee River FA has proven its unique capacity for analysis of spectra with high peak density: it identified not only the known small building blocks in the structure of FA, such as CH2, H2, C2H2O and O, but also a heavier unit at 154.027 amu. The latter was identified for the first time and assigned the formula C7H6O4, consistent with the structure of dihydroxybenzoic acids. The presence of these compounds in the structure of FA had so far been suggested numerically but never proven directly. It was concluded that application of the TMDS algorithm opens new horizons in unfolding the molecular complexity of NOM and other natural products.
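The core of a total-mass-difference statistic is easy to sketch: histogram all pairwise peak-mass differences and report the most frequent ones, which for NOM-like spectra recovers building blocks such as CH2 (14.01565 Da) or H2 (2.01565 Da). Real TMDS processing adds mass tolerances, noise filtering and formula assignment on top of this; the peaks below are synthetic.

```python
from collections import Counter
from itertools import combinations

# Hedged sketch of the mass-difference statistic only; synthetic peak list.

def top_mass_differences(masses, bin_width=0.001, n=5):
    counts = Counter()
    for a, b in combinations(sorted(masses), 2):
        # Bin each pairwise difference so near-identical values accumulate.
        counts[round((b - a) / bin_width) * bin_width] += 1
    return counts.most_common(n)

peaks = [301.1076, 315.1233, 329.1389, 343.1546, 303.1233, 317.1389]
print(top_mass_differences(peaks))   # 14.016 (CH2) should dominate here
```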
Multi-PON access network using a coarse AWG for smooth migration from TDM to WDM PON
NASA Astrophysics Data System (ADS)
Shachaf, Y.; Chang, C.-H.; Kourtessis, P.; Senior, J. M.
2007-06-01
An interoperable access network architecture based on a coarse array waveguide grating (AWG) is described, displaying dynamic wavelength assignment to manage the network load across multiple PONs. The multi-PON architecture utilizes the coarse Gaussian channels of an AWG to facilitate scalability and a smooth migration path between TDM and WDM PONs. Network simulations of a cross-operational protocol platform confirmed successful routing of individual PON clusters through the 7 nm-wide passband windows of the AWG. Furthermore, the polarization-dependent wavelength shift and phase errors of the device proved not to constrain the routing performance. Optical transmission tests at 2.5 Gbit/s over distances up to 20 km are demonstrated.
The spectroscopic observation of the CH radical in its a4Sigma(-) state
NASA Technical Reports Server (NTRS)
Nelis, Thomas; Brown, John M.; Evenson, Kenneth M.
1988-01-01
The first spectroscopic observations of CH in the a ⁴Σ⁻ state are reported. The molecule was generated in a discharge-flow system in the reaction between fluorine atoms and methane or between oxygen atoms and acetylene at a total pressure of about 1 Torr. Several resonances associated with the N = 1-0 transitions of ⁴Σ⁻ CH were observed at three separate laser wavelengths, while those for the N = 2-1 transition were observed at two wavelengths. Each observed Zeeman component consists of a well-split doublet arising from proton hyperfine structure. The reasons for assigning the observations to CH in its a ⁴Σ⁻ state are discussed.
A new algorithm using cross-assignment for label-free quantitation with LC-LTQ-FT MS.
Andreev, Victor P; Li, Lingyun; Cao, Lei; Gu, Ye; Rejtar, Tomas; Wu, Shiaw-Lin; Karger, Barry L
2007-06-01
A new algorithm is described for label-free quantitation of relative protein abundances across multiple complex proteomic samples. Q-MEND is based on the denoising and peak picking algorithm, MEND, previously developed in our laboratory. Q-MEND takes advantage of the high resolution and mass accuracy of the hybrid LTQ-FT MS mass spectrometer (or other high-resolution mass spectrometers, such as a Q-TOF MS). The strategy, termed "cross-assignment", is introduced to increase substantially the number of quantitated proteins. In this approach, all MS/MS identifications for the set of analyzed samples are combined into a master ID list, and then each LC-MS run is searched for the features that can be assigned to a specific identification from that master list. The reliability of quantitation is enhanced by quantitating separately all peptide charge states, along with a scoring procedure to filter out less reliable peptide abundance measurements. The effectiveness of Q-MEND is illustrated in the relative quantitative analysis of Escherichia coli samples spiked with known amounts of non-E. coli protein digests. A mean quantitation accuracy of 7% and mean precision of 15% is demonstrated. Q-MEND can perform relative quantitation of a set of LC-MS data sets without manual intervention and can generate files compatible with the Guidelines for Proteomic Data Publication.
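A hedged sketch of the cross-assignment step: pool the MS/MS identifications from all runs into one master list, then search each LC-MS run for a feature matching each master entry within m/z and retention-time tolerances. The tolerance values and peptides below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of cross-assignment matching; tolerances are typical
# high-resolution values (assumptions), not the paper's parameters.

def cross_assign(master_ids, run_features, ppm_tol=5.0, rt_tol=1.0):
    """master_ids: [(peptide, mz, rt)]; run_features: [(mz, rt, abundance)].
    Returns peptide -> abundance for this run (None when nothing matches)."""
    quant = {}
    for peptide, mz, rt in master_ids:
        best = None
        for f_mz, f_rt, f_ab in run_features:
            if abs(f_mz - mz) / mz * 1e6 <= ppm_tol and abs(f_rt - rt) <= rt_tol:
                if best is None or f_ab > best:
                    best = f_ab          # keep the most abundant candidate
        quant[peptide] = best
    return quant

master = [("LVNELTEFAK", 582.3190, 24.7), ("HLVDEPQNLIK", 653.3617, 21.3)]
run = [(582.3192, 24.8, 1.4e6), (653.3611, 28.9, 2.0e5)]
print(cross_assign(master, run))   # second peptide misses the RT window -> None
```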
Leveraging search and content exploration by exploiting context in folksonomy systems
NASA Astrophysics Data System (ADS)
Abel, Fabian; Baldoni, Matteo; Baroglio, Cristina; Henze, Nicola; Kawase, Ricardo; Krause, Daniel; Patti, Viviana
2010-04-01
With the advent of Web 2.0, tagging became a popular feature in social media systems. People tag diverse kinds of content, e.g. products at Amazon, music at Last.fm, images at Flickr, etc. In recent years several researchers have analyzed the impact of tags on information retrieval. Most works focused on tags only and ignored context information. In this article we present context-aware approaches for learning semantics and improving personalized information retrieval in tagging systems. We investigate how explorative search, initialized by clicking on tags, can be enhanced with automatically produced context information so that search results better fit the actual information needs of the users. We introduce the SocialHITS algorithm and present an experiment in which we compare different algorithms for ranking users, tags, and resources in a contextualized way. We showcase our approaches in the domain of images and present the TagMe! system, which enables users to explore and tag Flickr pictures. In TagMe! we further demonstrate how advanced context information can easily be generated: TagMe! allows users to attach tag assignments to a specific area within an image and to categorize tag assignments. In our corresponding evaluation we show that these additional facets of tag assignments yield valuable semantics, which can be applied to improve existing search and ranking algorithms significantly.
Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C
2016-06-01
Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job-title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained on 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description pair. We assigned the highest-scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert-assigned SOC codes, and by comparing lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit levels, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ = 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies.
Song, JooBong; Lee, Chaiwoo; Lee, WonJung; Bahn, Sangwoo; Jung, ChanJu; Yun, Myung Hwan
2015-01-01
For the successful implementation of job rotation, jobs should be scheduled systematically so that the physical workload is evenly distributed across the use of various body parts. However, while the potential benefits are widely recognized by research and industry, there is still a need for a more effective and efficient algorithm that considers multiple work-related factors in job rotation scheduling. This study proposes a job rotation algorithm that aims to minimize musculoskeletal disorders by decreasing the overall workload. Multiple work characteristics are evaluated as inputs to the proposed algorithm. Important factors, such as the physical workload on specific body parts, working height, involvement of heavy lifting, and worker characteristics such as physical disorders, are included in the algorithm. For evaluation of the overall workload in a given workplace, an objective function was defined to aggregate the scores from the individual factors. A case study in which the algorithm was applied at a workplace is presented, with an examination of its applicability and effectiveness. With the application of the suggested algorithm in the case study, the value of the final objective function, which is the weighted sum of the workload on various body parts, decreased by 71.7% compared to a typical sequential assignment and by 84.9% compared to a single job assignment, i.e., doing one job all day. The algorithm was developed using data from the ergonomic evaluation tool used in the plant and from known factors related to workload, so that it can be applied efficiently with a small number of required inputs while covering a wide range of work-related factors. The case study showed that the algorithm was beneficial in determining a job rotation schedule aimed at minimizing workload across body parts.
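A minimal sketch of such an objective function and a greedy scheduler, assuming hypothetical per-body-part job scores and severity weights rather than the plant's evaluation data; the paper's actual factor set and weighting are not reproduced here.

```python
import numpy as np

# Hedged sketch: jobs scored per body part, a weighted-sum objective over
# each worker's accumulated load, and a greedy per-period assignment.
# All scores and weights are illustrative assumptions.

JOBS = {"lift": np.array([3.0, 1.0, 0.5]),      # [back, shoulder, wrist] load
        "pack": np.array([1.0, 2.0, 2.5]),
        "inspect": np.array([0.5, 0.5, 1.0])}
WEIGHTS = np.array([2.0, 1.5, 1.0])             # body-part severity weights

def greedy_rotation(workers, periods):
    acc = {w: np.zeros(3) for w in workers}     # accumulated load per worker
    schedule = []
    for _ in range(periods):
        free, slot = set(JOBS), {}
        # Most-loaded worker picks first, taking the job that adds the least
        # weighted load to their running total.
        for w in sorted(workers, key=lambda w: -(WEIGHTS @ acc[w])):
            job = min(free, key=lambda j: WEIGHTS @ (acc[w] + JOBS[j]))
            free.remove(job)
            slot[w] = job
            acc[w] = acc[w] + JOBS[job]
        schedule.append(slot)
    objective = sum(WEIGHTS @ v for v in acc.values())   # weighted-sum objective
    return schedule, objective

print(greedy_rotation(["ann", "bo", "chi"], periods=3))
```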
ERIC Educational Resources Information Center
Jackson, C. Kirabo
2011-01-01
Existing studies on single-sex schooling suffer from biases due to student selection to schools and single-sex schools being better in unmeasured ways. In Trinidad and Tobago students are assigned to secondary schools based on an algorithm allowing one to address self-selection bias and cleanly estimate an upper-bound single-sex school effect. The…
Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment
Wan, Long
2014-01-01
We investigate a common due-date assignment scheduling problem with a variable maintenance activity on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties of an optimal solution for the problem. For a special case with identical jobs, we propose an optimal polynomial-time algorithm, followed by a numerical example. PMID:25147861
Methods for coherent lensless imaging and X-ray wavefront measurements
NASA Astrophysics Data System (ADS)
Guizar Sicairos, Manuel
X-ray diffractive imaging is set apart from other high-resolution imaging techniques (e.g. scanning electron or atomic force microscopy) for its high penetration depth, which enables tomographic 3D imaging of thick samples and buried structures. Furthermore, using short x-ray pulses, it enables the capability to take ultrafast snapshots, giving a unique opportunity to probe nanoscale dynamics at femtosecond time scales. In this thesis we present improvements to phase retrieval algorithms, assess their performance through numerical simulations, and develop new methods for both imaging and wavefront measurement. Building on the original work by Faulkner and Rodenburg, we developed an improved reconstruction algorithm for phase retrieval with transverse translations of the object relative to the illumination beam. Based on gradient-based nonlinear optimization, this algorithm is capable of estimating the object, and at the same time refining the initial knowledge of the incident illumination and the object translations. The advantages of this algorithm over the original iterative transform approach are shown through numerical simulations. Phase retrieval has already shown substantial success in wavefront sensing at optical wavelengths. Although in principle the algorithms can be used at any wavelength, in practice the focus-diversity mechanism that makes optical phase retrieval robust is not practical to implement for x-rays. In this thesis we also describe the novel application of phase retrieval with transverse translations to the problem of x-ray wavefront sensing. This approach allows the characterization of the complex-valued x-ray field in-situ and at-wavelength and has several practical and algorithmic advantages over conventional focused beam measurement techniques. A few of these advantages include improved robustness through diverse measurements, reconstruction from far-field intensity measurements only, and significant relaxation of experimental requirements over other beam characterization approaches. Furthermore, we show that a one-dimensional version of this technique can be used to characterize an x-ray line focus produced by a cylindrical focusing element. We provide experimental demonstrations of the latter at hard x-ray wavelengths, where we have characterized the beams focused by a kinoform lens and an elliptical mirror. In both experiments the reconstructions exhibited good agreement with independent measurements, and in the latter a small mirror misalignment was inferred from the phase retrieval reconstruction. These experiments pave the way for the application of robust phase retrieval algorithms for in-situ alignment and performance characterization of x-ray optics for nanofocusing. We also present a study on how transverse translations help with the well-known uniqueness problem of one-dimensional phase retrieval. We also present a novel method for x-ray holography that is capable of reconstructing an image using an off-axis extended reference in a non-iterative computation, greatly generalizing an earlier approach by Podorov et al. The approach, based on the numerical application of derivatives on the field autocorrelation, was developed from first mathematical principles. We conducted a thorough theoretical study to develop technical and intuitive understanding of this technique and derived sufficient separation conditions required for an artifact-free reconstruction. 
We studied the effects of missing information in the Fourier domain, and of an imperfect reference, and we provide a signal-to-noise ratio comparison with the more traditional approach of Fourier transform holography. We demonstrated this new holographic approach through proof-of-principle optical experiments and later experimentally at soft x-ray wavelengths, where we compared its performance to Fourier transform holography, iterative phase retrieval and state-of-the-art zone-plate x-ray imaging techniques (scanning and full-field). Finally, we present a demonstration of the technique using a single 20 fs pulse from a high-harmonic table-top source. Holography with an extended reference is shown to provide fast, good quality images that are robust to noise and artifacts that arise from missing information due to a beam stop. (Abstract shortened by UMI.)
On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies
Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier
2013-01-01
The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582
Hierarchical auto-configuration addressing in mobile ad hoc networks (HAAM)
NASA Astrophysics Data System (ADS)
Ram Srikumar, P.; Sumathy, S.
2017-11-01
Addressing plays a vital role in networking, as a device must be assigned a unique address in order to participate in data communication in any network. Different protocols defining different types of addressing have been proposed in the literature. Address auto-configuration is a key requirement for self-organizing networks. Existing auto-configuration-based addressing protocols require broadcasting probes to all the nodes in the network before assigning a proper address to a new node, and further broadcasts are then needed to reflect the status of the acquired address in the network. Such methods incur high communication overheads due to repetitive flooding. To address this overhead, a new partially stateful address allocation scheme, namely the Hierarchical Auto-configuration Addressing (HAAM) scheme, is extended and proposed. Hierarchical addressing reduces the latency and overhead incurred during address configuration. The partially stateful addressing algorithm assigns addresses without the need for flooding or global state awareness, which reduces the communication overhead and space complexity, respectively. Nodes are assigned addresses hierarchically, maintaining the graph of the network as a spanning tree, which effectively avoids the broadcast storm problem. The proposed HAAM algorithm handles network splits and merges efficiently in large-scale mobile ad hoc networks while incurring low communication overheads.
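A toy sketch of hierarchical, flood-free address allocation: a parent node derives a unique child address from purely local state, so no network-wide duplicate-address probes are needed. The dotted encoding is illustrative only, not the HAAM wire format.

```python
# Hedged sketch of tree-structured address auto-configuration; the dotted
# string encoding and Node API are illustrative, not HAAM's actual format.

class Node:
    def __init__(self, address):
        self.address = address     # e.g. "1.3.2" = second child of child 3
        self.next_child = 0

    def assign_child(self):
        """Allocate a child address using only local state (partially
        stateful): no duplicate-address probe floods are required."""
        self.next_child += 1
        return Node(f"{self.address}.{self.next_child}")

root = Node("1")
a = root.assign_child()          # "1.1"
b = root.assign_child()          # "1.2"
c = a.assign_child()             # "1.1.1" -- the spanning tree grows locally
print(a.address, b.address, c.address)
```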
On maximizing the lifetime of Wireless Sensor Networks by optimally assigning energy supplies.
Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; González-Castaño, Francisco Javier
2013-08-09
The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, making it a good alternative for large-scale WSNs. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively.
QAPgrid: A Two Level QAP-Based Approach for Large-Scale Data Analysis and Visualization
Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo
2011-01-01
Background The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain “hidden regularities” and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. Methodology/Principal Findings We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Conclusions/Significance Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used in the Shanghai Jiao Tong University Academic Ranking of World Universities, without the need for an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics. PMID:21267077
QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.
Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo
2011-01-18
The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities" and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used in the Shanghai Jiao Tong University Academic Ranking of World Universities, without the need for an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics.
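As a rough illustration of the QAP objective described above (not the authors' implementation, and omitting the memetic search entirely), the sketch below scores one assignment of objects to grid positions: similarity between objects is weighted by the grid distance of their assigned positions, so similar objects placed far apart are penalized. The Manhattan metric and the toy data are assumptions.

```python
import numpy as np

def qap_grid_cost(sim, assign, grid_w):
    """Cost = sum_{i<j} sim[i,j] * manhattan_distance(pos_i, pos_j)."""
    pos = np.array([(p % grid_w, p // grid_w) for p in assign])
    cost = 0.0
    for i in range(len(assign)):
        for j in range(i + 1, len(assign)):
            cost += sim[i, j] * np.abs(pos[i] - pos[j]).sum()
    return cost

rng = np.random.default_rng(0)
sim = rng.random((6, 6)); sim = (sim + sim.T) / 2   # symmetric similarity matrix
assign = rng.permutation(9)[:6]                     # 6 objects on a 3x3 grid
print(qap_grid_cost(sim, assign, grid_w=3))         # what the metaheuristic minimizes
```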
Two Different Approaches to Automated Mark Up of Emotions in Text
NASA Astrophysics Data System (ADS)
Francisco, Virginia; Hervás, Raquel; Gervás, Pablo
This paper presents two different approaches to the automated marking up of texts with emotional labels. In the first approach, a corpus of example texts previously annotated by human evaluators is mined for an initial assignment of emotional features to words. This results in a List of Emotional Words (LEW), which becomes a useful resource for later automated mark up. The mark up algorithm in this first approach mirrors closely the steps taken during feature extraction, employing for the actual assignment of emotional features a combination of the LEW resource and WordNet for knowledge-based expansion of words not occurring in LEW. The algorithm for automated mark up is tested against new text samples to assess its coverage. The second approach marks up texts during their generation, using a knowledge base that contains the necessary information, related to actions and characters, for marking up the text. The algorithm in this case employs the information in the knowledge base and decides the correct emotion for every sentence. This algorithm is tested against four different texts. The results of the two approaches are compared and discussed with respect to three main issues: the relative adequacy of each of the representations used, the correctness and coverage of the proposed algorithms, and additional techniques and solutions that may be employed to improve the results.
NASA Astrophysics Data System (ADS)
Moon, Byung-Young
2005-12-01
A hybrid neural-genetic multi-model parameter estimation algorithm is demonstrated. This method can be applied to the structured system identification of an electro-hydraulic servo system. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm: the ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces the next generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and an experiment was carried out with the hybrid neural-genetic multi-model parameter estimation algorithm. As a result, the dynamic characteristics were obtained, namely the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.
Computer program documentation: ISOCLS iterative self-organizing clustering program, program C094
NASA Technical Reports Server (NTRS)
Minter, R. T. (Principal Investigator)
1972-01-01
The author has identified the following significant results. This program implements an algorithm which, ideally, sorts a given set of multivariate data points into similar groups or clusters. The program is intended for use in the evaluation of multispectral scanner data; however, the algorithm could be used for other data types as well. The user may specify a set of initial estimated cluster means to begin the procedure, or may begin with the assumption that all the data belong to one cluster. The procedure is initialized by assigning each data point to the nearest (in absolute distance) cluster mean. If no initial cluster means were input, all of the data are assigned to cluster 1. The means and standard deviations are then calculated for each cluster.
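A minimal sketch of that assignment pass, using the absolute (L1) distance the abstract specifies; the empty-cluster fallback and the toy data are assumptions, not documented program behavior.

```python
import numpy as np

def isocls_assign(data, means):
    """One ISOCLS-style pass: assign each point to the nearest cluster mean in
    absolute (L1) distance, then recompute per-cluster means and std deviations."""
    dists = np.abs(data[:, None, :] - means[None, :, :]).sum(axis=2)   # (n, k)
    labels = dists.argmin(axis=1)
    new_means, new_stds = [], []
    for c in range(len(means)):
        members = data[labels == c]
        new_means.append(members.mean(axis=0) if len(members) else means[c])
        new_stds.append(members.std(axis=0) if len(members) else np.zeros(data.shape[1]))
    return labels, np.array(new_means), np.array(new_stds)

rng = np.random.default_rng(0)
pixels = rng.random((500, 4))                    # e.g. 4-band multispectral samples
labels, mu, sigma = isocls_assign(pixels, means=pixels[:3].copy())   # 3 initial means
```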
Customizing FP-growth algorithm to parallel mining with Charm++ library
NASA Astrophysics Data System (ADS)
Puścian, Marek
2017-08-01
This paper presents a frequent item mining algorithm that was customized to handle growing data repositories. The proposed solution applies a master-slave scheme to the frequent pattern growth (FP-growth) technique. Efficient utilization of the available computation units is achieved by dynamic reallocation of tasks: conditional frequent pattern trees are assigned to parallel workers based on their workload. The proposed enhancements have been successfully implemented using the Charm++ library. The paper discusses the performance of the parallelized FP-growth algorithm on different datasets, illustrated with experiments and measurements performed on a multiprocessor, multithreaded computer.
A Spiking Neural Network in sEMG Feature Extraction.
Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor
2015-11-03
We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated; the results showed comparable accuracy despite a significant difference in sampling rates. The proposed algorithm was successfully tested for mobile robot control.
MATSurv: multisensor air traffic surveillance system
NASA Astrophysics Data System (ADS)
Yeddanapudi, Murali; Bar-Shalom, Yaakov; Pattipati, Krishna R.; Gassner, Richard R.
1995-09-01
This paper deals with the design and implementation of MATSurv 1--an experimental Multisensor Air Traffic Surveillance system. The proposed system consists of a Kalman filter based state estimator used in conjunction with a 2D sliding window assignment algorithm; a minimal sketch of the assignment step follows. Real data from two FAA radars are used to evaluate the performance of this algorithm. The results indicate that the proposed algorithm provides a superior classification of the measurements into tracks (i.e., the most likely aircraft trajectories) when compared to the aircraft trajectories obtained using the measurement IDs (squawk or IFF code).
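A minimal sketch of a gated measurement-to-track assignment, using scipy's Hungarian solver in place of the paper's sliding-window implementation; the gate value, the large-cost sentinel, and the toy coordinates are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_measurements(track_preds, measurements, gate=50.0):
    """Gated 2D assignment: cost is the distance between each track's predicted
    position and each measurement; pairs beyond the gate are effectively forbidden."""
    cost = np.linalg.norm(track_preds[:, None, :] - measurements[None, :, :], axis=2)
    cost = np.where(cost > gate, 1e6, cost)            # forbid out-of-gate pairings
    rows, cols = linear_sum_assignment(cost)           # optimal track-measurement pairs
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]

tracks = np.array([[0.0, 0.0], [100.0, 50.0]])         # predicted positions (km)
meas = np.array([[2.0, 1.0], [99.0, 52.0], [400.0, 7.0]])
print(assign_measurements(tracks, meas))               # [(0, 0), (1, 1)]
```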
NASA Astrophysics Data System (ADS)
Kupinski, Meredith; Rehbinder, Jean; Haddad, Huda; Deby, Stanislas; Vizet, Jérémy; Teig, Benjamin; Nazac, André; Pierangelo, Angelo; Moreau, François; Novikova, Tatiana
2017-07-01
Significant contrast in visible wavelength Mueller matrix images for healthy and pre-cancerous regions of excised cervical tissue is shown. A novel classification algorithm is used to compute a test statistic from a small patient population.
MERIS Retrieval of Water Quality Components in the Turbid Albemarle-Pamlico Sound Estuary, USA
Biological, geophysical and optical field observations carried out in the Neuse River Estuary, North Carolina, USA were used to develop a semi-empirical optical algorithm for assessing inherent optical properties associated with water quality components (WQCs). Three wavelengths ...
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index of the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, which is a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
Lee, ZhongPing; Arnone, Robert; Hu, Chuanmin; Werdell, P Jeremy; Lubac, Bertrand
2010-01-20
Following the theory of error propagation, we developed analytical functions to illustrate and evaluate the uncertainties of inherent optical properties (IOPs) derived by the quasi-analytical algorithm (QAA). In particular, we evaluated the effects of uncertainties of these optical parameters on the inverted IOPs: the absorption coefficient at the reference wavelength, the extrapolation of particle backscattering coefficient, and the spectral ratios of absorption coefficients of phytoplankton and detritus/gelbstoff, respectively. With a systematically simulated data set (46,200 points), we found that the relative uncertainty of QAA-derived total absorption coefficients in the blue-green wavelengths is generally within +/-10% for oceanic waters. The results of this study not only establish theoretical bases to evaluate and understand the effects of the various variables on IOPs derived from remote-sensing reflectance, but also lay the groundwork to analytically estimate uncertainties of these IOPs for each pixel. These are required and important steps for the generation of quality maps of IOP products derived from satellite ocean color remote sensing.
Harnessing speckle for a sub-femtometre resolved broadband wavemeter and laser stabilization
Metzger, Nikolaus Klaus; Spesyvtsev, Roman; Bruce, Graham D.; Miller, Bill; Maker, Gareth T.; Malcolm, Graeme; Mazilu, Michael; Dholakia, Kishan
2017-01-01
The accurate determination and control of the wavelength of light is fundamental to many fields of science. Speckle patterns resulting from the interference of multiple reflections in disordered media are well-known to scramble the information content of light by complex but linear processes. However, these patterns are, in fact, exceptionally rich in information about the illuminating source. We use a fibre-coupled integrating sphere to generate wavelength-dependent speckle patterns, in combination with algorithms based on the transmission matrix method and principal component analysis, to realize a broadband and sensitive wavemeter. We demonstrate sub-femtometre wavelength resolution at a centre wavelength of 780 nm, and a broad calibrated measurement range from 488 to 1,064 nm. This compares favourably to the performance of conventional wavemeters. Using this speckle wavemeter as part of a feedback loop, we stabilize a 780 nm diode laser to achieve a linewidth better than 1 MHz. PMID:28580938
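In outline, the principal-component mode of such a wavemeter reduces each speckle frame to a few components and regresses wavelength on them. The sketch below is a hypothetical minimal version using scikit-learn; the frame size, component count, and random "speckle" data are placeholders, and the real instrument calibrates against a wavelength-stepped reference laser.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Calibration: flattened speckle frames recorded at known wavelengths.
rng = np.random.default_rng(1)
X_train = rng.random((200, 64 * 64))          # placeholder speckle frames
wl_train = np.linspace(779.9, 780.1, 200)     # nm, around the 780 nm operating point

# PCA compresses each frame; a linear model maps components to wavelength.
model = make_pipeline(PCA(n_components=20), LinearRegression())
model.fit(X_train, wl_train)
wl_estimate = model.predict(X_train[:1])      # infer the wavelength of a new frame
```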
Near-infrared noninvasive spectroscopic determination of pH
Alam, Mary K.; Robinson, Mark R.
1998-08-11
Methods and apparatus for, preferably, determining noninvasively and in vivo the pH in a human. The noninvasive method includes the steps of: generating light at three or more different wavelengths in the range of 1000 nm to 2500 nm; irradiating blood-containing tissue; measuring the intensities of the wavelengths emerging from the blood-containing tissue to obtain a set of at least three spectral intensities vs. wavelengths; and determining the unknown values of pH. The determination of pH is made by using measured intensities at wavelengths that exhibit a change in absorbance due to histidine titration; histidine absorbance changes are due to titration by hydrogen ions. The determination of the unknown pH values is performed by at least one multivariate algorithm using two or more variables and at least one calibration model. The determined pH values are within the physiological ranges observed in blood-containing tissue. The apparatus includes a tissue positioning device, a source, at least one detector, electronics, a microprocessor, memory, and apparatus for indicating the determined values.
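The multivariate calibration step can be illustrated with partial least squares, one common choice for such spectra (the patent only requires "at least one multivariate algorithm", so PLS here is an assumption); all data below are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Calibration set: absorbance spectra with reference pH values per sample.
rng = np.random.default_rng(2)
spectra = rng.random((60, 150))                 # placeholder spectra on a 1000-2500 nm grid
ph_ref = 7.35 + 0.1 * rng.standard_normal(60)   # reference pH per sample

pls = PLSRegression(n_components=5)             # multivariate calibration model
pls.fit(spectra, ph_ref)
ph_pred = pls.predict(spectra[:1])              # pH predicted from an unknown spectrum
```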
NASA Astrophysics Data System (ADS)
Eriçok, Ozan Burak; Ertürk, Hakan
2018-07-01
Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a lower size limit for reliable characterization that depends on the wavelength of the light source used. In this study, these characterization limits are determined for light source wavelengths ranging from the ultraviolet to the near infrared (266-1064 nm), relying on numerical light scattering experiments. Two different measurement ensembles are considered: a collection of well-separated aggregates made up of identically sized particles, and one with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates, and the light scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that over the 266-1064 nm wavelength range, the successful characterization limit varies from 21 to 62 nm in effective radius for monodisperse and polydisperse soot aggregates.
New preemptive scheduling for OBS networks considering cascaded wavelength conversion
NASA Astrophysics Data System (ADS)
Gao, Xingbo; Bassiouni, Mostafa A.; Li, Guifang
2009-05-01
In this paper we introduce a new preemptive scheduling technique for next-generation optical burst-switched networks that considers the impact of cascaded wavelength conversions. It has been shown that when optical bursts are transmitted all-optically from source to destination, each wavelength conversion performed along the lightpath may cause a certain signal-to-noise deterioration; if the distortion of the signal quality becomes significant enough, the receiver cannot recover the original data. Accordingly, subject to this practical impediment, we improve a recently proposed fair channel scheduling algorithm to address the fairness problem while simultaneously aiming at burst loss reduction in optical burst switching. In our scheme, the dynamic priority associated with each burst is based on, among other factors, a constraint threshold and the number of wavelength conversions already performed on the burst. When contention occurs, a newly arriving higher-priority burst may preempt an already scheduled one according to their priorities. Extensive simulation results show that the proposed scheme further improves fairness and achieves burst loss reduction as well.
NASA Astrophysics Data System (ADS)
Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin
2017-09-01
This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decision of storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem. The optimal travel cycles for multi-shuttle S/R machines can be obtained to process S/R requests in the storage and retrieval request order lists by solving the model. The small-sized instances are optimally solved using CPLEX. For large-sized problems, two tabu search algorithms are proposed, in which the first come, first served and nearest neighbour are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics is recommended under different real-life scenarios.
Gobets, Bas; van Stokkum, Ivo H M; van Mourik, Frank; Dekker, Jan P; van Grondelle, Rienk
2003-12-01
The excitation-wavelength dependence of the excited-state dynamics of monomeric and trimeric Photosystem I (PSI) particles from Synechocystis PCC 6803 as well as trimeric PSI particles from Synechococcus elongatus has been studied at room temperature using time-resolved fluorescence spectroscopy. For aselective (400 nm), carotenoid (505 nm), and bulk chlorophyll (approximately 650 nm) excitation in all species, a downhill energy-transfer component is observed, corresponding to a lifetime of 3.4-5.5 ps. For selective red excitation (702-719 nm) in all species, a significantly faster, an approximately 1-ps, uphill transfer component was recorded. In Synechococcus PSI, an additional approximately 10-ps downhill energy-transfer component is found for all wavelengths of excitation, except 719 nm. Each of the species exhibits its own characteristic trap spectrum, the shape of which is independent of the wavelength of excitation. This trap spectrum decays in approximately 23 ps in both monomeric and trimeric Synechocystis PSI and in approximately 35 ps in trimeric Synechococcus PSI. The data were simulated based on the 2.5 A structural model of PSI of Synechococcus elongatus using the Förster equation for energy transfer, and using the 0.6-1-ps charge-separation time and the value of 1.2-1.3 for the index of refraction that were obtained from the dynamics of a hypothetical PSI particle without red chls. The experimentally obtained lifetimes and spectra were reproduced well by assigning three of the chlorophyll-a (chla) dimers observed in the structure to the C708/C702RT pool of red chls present in PSI from both species. Essential for the simulation of the dynamics of Synechococcus PSI is the assignment of the single chla trimer in the structure to the C719/C708RT pool present in this species.
Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.
2001-01-01
An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward-backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low source frequencies, but at higher source frequencies the third algorithm saves CPU time and RAM. The CPU time and RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.
NASA Astrophysics Data System (ADS)
Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.
2017-08-01
Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.
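For reference, the absorption Ångström exponent discussed here follows from the power-law assumption b_abs(λ) ∝ λ^(−åABS); a two-wavelength sketch, with made-up coefficients, is:

```python
import numpy as np

def angstrom_exponent(b_abs_1, b_abs_2, wl_1, wl_2):
    """Absorption Angstrom exponent from absorption coefficients at two
    wavelengths, assuming b_abs ~ wl**(-aABS)."""
    return -np.log(b_abs_1 / b_abs_2) / np.log(wl_1 / wl_2)

# Illustrative values only: coefficients in Mm^-1 at 470 and 660 nm.
print(angstrom_exponent(3.2, 1.8, 470.0, 660.0))   # ~1.7
```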
Schmidt, M; Fürstenau, N
1999-05-01
A three-wavelength-based passive quadrature digital phase-demodulation scheme has been developed for readout of fiber-optic extrinsic Fabry-Perot interferometer vibration, acoustic, and strain sensors. This scheme uses a superluminescent diode light source with interference filters in front of the photodiodes and real-time arctan calculation. Quasi-static strain and dynamic vibration sensing with up to an 80-kHz sampling rate is demonstrated. Periodic nonlinearities owing to dephasing with increasing fringe number are corrected for with a suitable algorithm, resulting in significant improvement of the linearity of the sensor characteristics.
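The core of the readout, phase recovery by real-time arctan calculation from quadrature signals, can be sketched as follows; the batch `numpy.unwrap` stands in for the paper's fringe-dephasing correction, and the test signal is illustrative.

```python
import numpy as np

def demodulate(i_signal, q_signal):
    """Recover interferometric phase from two signals in quadrature and unwrap
    it across fringes; a real-time system would do this sample by sample."""
    phase = np.arctan2(q_signal, i_signal)
    return np.unwrap(phase)

t = np.linspace(0.0, 0.01, 2000)                  # 10 ms record of a vibration
true_phase = 6 * np.pi * np.sin(2 * np.pi * 300 * t)
est = demodulate(np.cos(true_phase), np.sin(true_phase))
print(np.abs(est - true_phase).max())             # ~0 for well-sampled fringes
```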
Retinal oxygen saturation evaluation by multi-spectral fundus imaging
NASA Astrophysics Data System (ADS)
Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James
2007-03-01
Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in the five monkeys with a commercial fundus camera equipped with a liquid crystal tunable filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds, with each individual image captured in less than 100 msec of flash illumination. Images of separate wavelengths that were slightly misaligned due to eye motion were corrected by translational and rotational registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins, and of the underlying tissue between the artery/vein pairs, were evaluated by an algorithm previously described, now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina the artery is more saturated than the tissue, and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a time scale applicable to human subjects. Conclusions: Seven-wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal arteries, veins, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans.
Variables selection methods in near-infrared spectroscopy.
Xiaobo, Zou; Jiewen, Zhao; Povey, Malcolm J W; Holmes, Mel; Hanpin, Mao
2010-05-14
Near-infrared (NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, such as the petrochemical, pharmaceutical, environmental, clinical, agricultural, food and biomedical sectors, during the past 15 years. A NIR spectrum of a sample is typically measured by modern scanning instruments at hundreds of equally spaced wavelengths. The large number of spectral variables in most data sets encountered in NIR spectral chemometrics often renders the prediction of a dependent variable unreliable. Recently, considerable effort has been directed towards developing and evaluating different procedures that objectively identify variables which contribute useful information and/or eliminate variables containing mostly noise. This review focuses on variable selection methods in NIR spectroscopy. Selection methods include classical approaches, such as the manual approach (knowledge-based selection) and "Univariate" and "Sequential" selection methods; sophisticated methods such as the successive projections algorithm (SPA) and uninformative variable elimination (UVE); elaborate search-based strategies such as simulated annealing (SA), artificial neural networks (ANN) and genetic algorithms (GAs); and interval-based algorithms such as interval partial least squares (iPLS), windows PLS and iterative PLS. Wavelength selection with B-splines, Kalman filtering, Fisher's weights and Bayesian approaches is also mentioned. Finally, the websites of some variable selection software and toolboxes for non-commercial use are given. Copyright 2010 Elsevier B.V. All rights reserved.
Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhao, Chun-Jiang
2014-10-01
In order to detect the soluble solids content (SSC) of apples conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectra of apples. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS) and genetic algorithm (GA), were proposed to select effective wavelength variables of the NIR spectra for SSC in apples based on PLS. The back interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. Selected wavelength variables and the full wavelength range were set as input variables of the PLS model and the LS-SVM model, respectively. The results indicated that the PLS model built using GA-CARS on 50 characteristic variables, selected from the full spectrum of 1512 wavelengths, achieved the optimal performance: the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.962 and 0.403 °Brix, respectively, for SSC. The proposed GA-CARS method can effectively simplify the portable detection model of SSC in apples based on near-infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.
Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun
2017-01-01
This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with radius λ, where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node by means of beamforming. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to nearly optimize the power ratio of the UCA beamforming peak side lobe (PSL) to the main lobe (ML) aimed at the given target direction; a sketch of this objective appears below. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority. PMID:28825648
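A minimal sketch of the quantity being optimized, assuming an ideal 12-element UCA of radius λ with phase-only steering; the side-lobe window width and the angular sampling are crude assumptions, and the EBO/EBO-R algorithms themselves (which evolve the element weights) are not reproduced here.

```python
import numpy as np

def uca_array_factor(phi, phi0, n_ant=12, radius_wl=1.0):
    """Azimuthal array factor of a uniform circular array steered to phi0;
    radius_wl is the radius in wavelengths (the paper's UCA uses radius = lambda)."""
    k = 2 * np.pi                                  # wavenumber in units of 1/wavelength
    ant = 2 * np.pi * np.arange(n_ant) / n_ant     # element angular positions
    phase = k * radius_wl * (np.cos(phi[:, None] - ant) - np.cos(phi0 - ant))
    return np.abs(np.exp(1j * phase).sum(axis=1))

phi = np.linspace(-np.pi, np.pi, 3600)
af = uca_array_factor(phi, phi0=0.0)
main = af.max()
# Rough PSL: largest value outside a window around the main beam (window width assumed).
side = af[np.abs(phi) > 0.5].max()
print((side / main) ** 2)                          # PSL/ML power ratio the EAs minimize
```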
NASA Astrophysics Data System (ADS)
Cao, Xiaojun; Anand, Vishal; Qiao, Chunming
2006-12-01
Optical networks using wavelength-division multiplexing (WDM) are the foremost solution to the ever-increasing traffic in the Internet backbone. Rapid advances in WDM technology will enable each fiber to carry hundreds or even a thousand wavelengths (using dense-WDM, or DWDM, and ultra-DWDM) of traffic. This, coupled with worldwide fiber deployment, will bring about a tremendous increase in the size of the optical cross-connects, i.e., the number of ports of the wavelength switching elements. Waveband switching (WBS), wherein wavelengths are grouped into bands and switched as a single entity, can reduce the cost and control complexity of switching nodes by minimizing the port count. This paper presents a detailed study on recent advances and open research issues in WBS networks. In this study, we investigate in detail the architecture for various WBS cross-connects and compare them in terms of the number of ports and complexity and also in terms of how flexible they are in adjusting to dynamic traffic. We outline various techniques for grouping wavelengths into bands for the purpose of WBS and show how traditional wavelength routing is different from waveband routing and why techniques developed for wavelength-routed networks (WRNs) cannot be simply applied to WBS networks. We also outline how traffic grooming of subwavelength traffic can be done in WBS networks. In part II of this study [Cao, submitted to J. Opt. Netw.], we study the effect of wavelength conversion on the performance of WBS networks with reconfigurable MG-OXCs. We present an algorithm for waveband grouping in wavelength-convertible networks and evaluate its performance. We also investigate issues related to survivability in WBS networks and show how waveband and wavelength conversion can be used to recover from failures in WBS networks.
NASA Astrophysics Data System (ADS)
Minkov, D. A.; Gavrilov, G. M.; Moreno, J. M. D.; Vazquez, C. G.; Marquez, E.
2017-03-01
The accuracy of the popular graphical method of Swanepoel (SGM) for the characterization of a thin film on a substrate from its interference transmittance spectrum depends on the subjective choice of four characterization parameters: the slope of the graph, the order number of the longest-wavelength extremum, and the two numbers of extrema used in the two approximations for calculating the average film thickness. Here, an error metric is introduced for estimating the accuracy of SGM characterization, and an algorithm, named the OGM algorithm, is proposed for the optimization of SGM based on the minimization of this error metric. Its execution provides optimized values of the four characterization parameters and the corresponding computation of the most accurate film characteristics achievable within the framework of SGM. Moreover, substrate absorption is accounted for, unlike in the classical SGM, which is beneficial when using modern UV/visible/NIR spectrophotometers due to the relatively larger absorption of the commonly used glass substrates at wavelengths above 1700 nm. A significant increase in the accuracy of the film characteristics is obtained with the OGM algorithm compared to the SGM algorithm for two model specimens, and the improvement in accuracy increases with increasing film absorption. The results of film characterization by the OGM algorithm are presented for two specimens containing RF-magnetron-sputtered a-Si films with disparate film thicknesses. The computed average film thicknesses are within 1.1% of the respective film thicknesses measured by SEM for both films. Achieving such high film characterization accuracy is particularly significant for the film with a computed average thickness of 3934 nm, since we are not aware of any other film of such a large thickness that has been characterized by SGM.
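For context, the SGM-style thickness estimate that such characterization parameters feed into uses adjacent interference extrema; a minimal sketch with illustrative numbers (not the paper's data) is:

```python
def film_thickness(wl1, n1, wl2, n2):
    """Swanepoel two-extrema estimate of film thickness (same units as the
    wavelengths): d = wl1*wl2 / (2*(wl1*n2 - wl2*n1)) for adjacent extrema."""
    return wl1 * wl2 / (2 * (wl1 * n2 - wl2 * n1))

# Illustrative numbers only: adjacent extrema of an a-Si-like film.
print(film_thickness(wl1=820.0, n1=3.60, wl2=760.0, n2=3.65))   # ~1212 nm
```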
Chen, Jun; Quan, Wenting; Cui, Tingwei
2015-01-01
In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparing the assessment accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm performed best: using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary reduced the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm, a very significant improvement upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms, or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
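A minimal sketch of the three-band index underlying the TSA; the UMSA generalization and the paper's calibrated coefficients are not reproduced, and the coefficients and reflectances below are placeholders.

```python
def three_band_index(rrs1, rrs2, rrs3):
    """Three-band semi-analytical index: (1/Rrs(l1) - 1/Rrs(l2)) * Rrs(l3);
    Chla is then obtained from a linear (or power-law) fit to in situ data."""
    return (1.0 / rrs1 - 1.0 / rrs2) * rrs3

# Hypothetical calibration chla = a * index + b fitted on matchup data.
index = three_band_index(rrs1=0.004, rrs2=0.006, rrs3=0.009)
a, b = 115.0, 1.5                      # placeholder coefficients
print(a * index + b)                   # estimated Chla, mg m^-3
```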
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sera White
2012-04-01
This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (approximately 3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.
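The instantaneous road slope computation described first can be sketched as rise over great-circle run between consecutive GPS fixes; the haversine ground distance and the toy coordinates below are assumptions about the method's details, not the thesis's exact procedure.

```python
import math

def road_slope(lat1, lon1, elev1, lat2, lon2, elev2):
    """Instantaneous road slope between consecutive GPS fixes: rise over
    great-circle run, with elevations sampled from the DEM at each fix."""
    r_earth = 6371000.0                              # metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    run = 2 * r_earth * math.asin(math.sqrt(a))      # haversine ground distance
    return (elev2 - elev1) / run if run > 0 else 0.0

print(road_slope(38.58, -121.49, 12.0, 38.581, -121.49, 15.0))  # ~2.7% grade
```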
Hodge, Meryl C; Dixon, Stephanie; Garg, Amit X; Clemens, Kristin K
2017-06-01
To determine the positive predictive value and sensitivity of an International Statistical Classification of Diseases and Related Health Problems, 10th Revision, coding algorithm for hospital encounters concerning hypoglycemia. We carried out 2 retrospective studies in Ontario, Canada. We examined medical records from 2002 through 2014, in which older adults (mean age, 76) were assigned at least 1 code for hypoglycemia (E15, E160, E161, E162, E1063, E1163, E1363, E1463). The positive predictive value of the algorithm was calculated using a gold-standard definition (blood glucose value <4 mmol/L or physician diagnosis of hypoglycemia). To determine the algorithm's sensitivity, we used linked healthcare databases to identify older adults (mean age, 77) with laboratory plasma glucose values <4 mmol/L during a hospital encounter that took place between 2003 and 2011. We assessed how frequently a code for hypoglycemia was present. We also examined the algorithm's performance in differing clinical settings (e.g. inpatient vs. emergency department, by hypoglycemia severity). The positive predictive value of the algorithm was 94.0% (95% confidence interval 89.3% to 97.0%), and its sensitivity was 12.7% (95% confidence interval 11.9% to 13.5%). It performed better in the emergency department and in cases of more severe hypoglycemia (plasma glucose values <3.5 mmol/L compared with ≥3.5 mmol/L). Our hypoglycemia algorithm has a high positive predictive value but is limited in sensitivity. Although we can be confident that older adults who are assigned 1 of these codes truly had a hypoglycemia event, many episodes will not be captured by studies using administrative databases. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.
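The two validation metrics are straightforward to compute; the sketch below uses illustrative counts chosen only to reproduce numbers of the reported magnitude, not the study's actual cell counts.

```python
def ppv_and_sensitivity(tp, fp, fn):
    """Positive predictive value and sensitivity of a coding algorithm against
    a gold standard (here: chart-confirmed hypoglycemia events)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts only, shaped like the findings (high PPV, low sensitivity).
ppv, sens = ppv_and_sensitivity(tp=235, fp=15, fn=1615)
print(f"PPV={ppv:.3f}, sensitivity={sens:.3f}")      # PPV=0.940, sensitivity=0.127
```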
Streaming fragment assignment for real-time analysis of sequencing experiments
Roberts, Adam; Pachter, Lior
2013-01-01
We present eXpress, a software package for highly efficient probabilistic assignment of ambiguously mapping sequenced fragments. eXpress uses a streaming algorithm with linear run time and constant memory use. It can determine abundances of sequenced molecules in real time, and can be applied to ChIP-seq, metagenomics and other large-scale sequencing data. We demonstrate its use on RNA-seq data, showing greater efficiency than other quantification methods. PMID:23160280
FD/DAMA Scheme For Mobile/Satellite Communications
NASA Technical Reports Server (NTRS)
Yan, Tsun-Yee; Wang, Charles C.; Cheng, Unjeng; Rafferty, William; Dessouky, Khaled I.
1992-01-01
Integrated-Adaptive Mobile Access Protocol (I-AMAP) proposed to allocate communication channels to subscribers in first-generation MSAT-X mobile/satellite communication network. Based on concept of frequency-division/demand-assigned multiple access (FD/DAMA) where partition of available spectrum adapted to subscribers' demands for service. Requests processed, and competing requests resolved according to channel-access protocol, or free-access tree algorithm described in "Connection Protocol for Mobile/Satellite Communications" (NPO-17735). Assigned spectrum utilized efficiently.
Validation Studies of the Accuracy of Various SO2 Gas Retrievals in the Thermal InfraRed (8-14 μm)
NASA Astrophysics Data System (ADS)
Gabrieli, A.; Wright, R.; Lucey, P. G.; Porter, J. N.; Honniball, C.; Garbeil, H.; Wood, M.
2016-12-01
Quantifying hazardous SO2 in the atmosphere and in volcanic plumes is important for public health and volcanic eruption prediction. Remote sensing measurements of spectral radiance of plumes contain information on the abundance of SO2. However, in order to convert such measurements into SO2 path-concentrations, reliable inversion algorithms are needed. Various techniques can be employed to derive SO2 path-concentrations. The first approach employs a Partial Least Square Regression model trained using MODTRAN5 simulations for a variety of plume and atmospheric conditions. Radiances at many spectral wavelengths (8-14 μm) were used in the algorithm. The second algorithm uses measurements inside and outside the SO2 plume. Measurements in the plume-free region (background sky) make it possible to remove background atmospheric conditions and any instrumental effects. After atmospheric and instrumental effects are removed, MODTRAN5 is used to fit the SO2 spectral feature and obtain SO2 path-concentrations. The two inversion algorithms described above can be compared with the inversion algorithm for SO2 retrievals developed by Prata and Bernardo (2014). Their approach employs three wavelengths to characterize the plume temperature, the atmospheric background, and the SO2 path-concentration. The accuracy of these various techniques requires further investigation in terms of the effects of different atmospheric background conditions. Validating these inversion algorithms is challenging because ground truth measurements are very difficult. However, if the three separate inversion algorithms provide similar SO2 path-concentrations for actual measurements with various background conditions, then this increases confidence in the results. Measurements of sky radiance when looking through SO2 filled gas cells were collected with a Thermal Hyperspectral Imager (THI) under various atmospheric background conditions. These data were processed using the three inversion approaches, which were tested for convergence on the known SO2 gas cell path-concentrations. For this study, the inversion algorithms were modified to account for the gas cell configuration. Results from these studies will be presented, as well as results from SO2 gas plume measurements at Kīlauea volcano, Hawai'i.
Design and optimization of all-optical networks
NASA Astrophysics Data System (ADS)
Xiao, Gaoxi
1999-10-01
In this thesis, we present our research results on the design and optimization of all-optical networks. We divide our results into the following four parts: 1. In the first part, we consider broadcast-and-select networks. In our research, we propose an alternative and cheaper network configuration to hide the tuning time. In addition, we derive lower bounds on the optimal schedule lengths and prove that they are tighter than the best existing bounds. 2. In the second part, we consider all-optical wide area networks. We propose a set of algorithms for allocating a given number of WCs to the nodes. We adopt a simulation-based optimization approach, in which we collect utilization statistics of WCs from computer simulation and then perform optimization to allocate the WCs. Therefore, our algorithms are widely applicable and they are not restricted to any particular model and assumption. We have conducted extensive computer simulation on regular and irregular networks under both uniform and non-uniform traffic. We see that our method can get nearly the same performance as that of full wavelength conversion by using a much smaller number of WCs. Compared with the best existing method, the results show that our algorithms can significantly reduce (1) the overall blocking probability (i.e., better mean quality of service) and (2) the maximum of the blocking probabilities experienced at all the source nodes (i.e., better fairness). Equivalently, for a given performance requirement on blocking probability, our algorithms can significantly reduce the number of WCs required. 3. In the third part, we design and optimize the physical topology of all-optical wide area networks. We show that the design problem is NP-complete and we propose a heuristic algorithm called the two-stage cut saturation algorithm for this problem. Simulation results show that (1) the proposed algorithm can efficiently design networks with low cost and high utilization, and (2) if wavelength converters are available to support full wavelength conversion, the cost of the links can be significantly reduced. 4. In the fourth part, we consider all-optical wide area networks with multiple fibers per link. We design a node configuration for all-optical networks. We exploit the flexibility that, to establish a lightpath across a node, we can select any one of the available channels in the incoming link and any one of the available channels in the outgoing link. As a result, the proposed node configuration requires a small number of small optical switches while it can achieve nearly the same performance as the existing one. And there is no additional crosstalk other than the intrinsic crosstalk within each single-chip optical switch.* (Abstract shortened by UMI.) *Originally published in DAI Vol. 60, No. 2. Reprinted here with corrected author name.
Ontological Problem-Solving Framework for Dynamically Configuring Sensor Systems and Algorithms
Qualls, Joseph; Russomanno, David J.
2011-01-01
The deployment of ubiquitous sensor systems and algorithms has led to many challenges, such as matching sensor systems to compatible algorithms which are capable of satisfying a task. Compounding the challenges is the lack of the requisite knowledge models needed to discover sensors and algorithms and to subsequently integrate their capabilities to satisfy a specific task. A novel ontological problem-solving framework has been designed to match sensors to compatible algorithms to form synthesized systems, which are capable of satisfying a task and then assigning the synthesized systems to high-level missions. The approach designed for the ontological problem-solving framework has been instantiated in the context of a persistence surveillance prototype environment, which includes profiling sensor systems and algorithms to demonstrate proof-of-concept principles. Even though the problem-solving approach was instantiated with profiling sensor systems and algorithms, the ontological framework may be useful with other heterogeneous sensing-system environments. PMID:22163793
THE PSTD ALGORITHM: A TIME-DOMAIN METHOD REQUIRING ONLY TWO CELLS PER WAVELENGTH. (R825225)
A pseudospectral time-domain (PSTD) method is developed for solutions of Maxwell's equations. It uses the fast Fourier transform (FFT), instead of finite differences on conventional finite-difference-time-domain (FDTD) methods, to represent spatial derivatives. Because the Fourie...
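The FFT-based spatial derivative at the heart of PSTD can be sketched in a few lines; the 1-D periodic test field is illustrative.

```python
import numpy as np

def spectral_derivative(f, dx):
    """Spatial derivative via FFT, the core of pseudospectral time-domain (PSTD)
    schemes; spectral accuracy is what permits ~2 cells per wavelength."""
    n = f.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # spatial frequencies
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
err = np.abs(spectral_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)).max()
print(err)                                            # ~1e-14 for a band-limited field
```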
SPHINX--an algorithm for taxonomic binning of metagenomic sequences.
Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S
2011-01-01
Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms is significantly higher. However, being alignment-based, the latter class of algorithms require enormous amount of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.
Advanced Techniques for Scene Analysis
2010-06-01
robustness prefers a bigger integration window to handle larger motions. The advantage of pyramidal implementation is that, while each motion vector dL...labeled SAR images. Now the previous algorithm leads to a more dedicated classifier for the particular target; however, our algorithm trades generality for...accuracy is traded for generality. 7.3.2 I-RELIEF Feature weighting transforms the original feature vector x into a new feature vector x′ by assigning each
Hara, Risa; Ishigaki, Mika; Kitahama, Yasutaka; Ozaki, Yukihiro; Genkawa, Takuma
2018-08-30
The difference in Raman spectra for different excitation wavelengths (532 nm, 785 nm, and 1064 nm) was investigated to identify an appropriate wavelength for the quantitative analysis of carotenoids in tomatoes. For the 532 nm-excited Raman spectra, the intensity of the peak assigned to the carotenoid has no correlation with carotenoid concentration, and the peak shift reflects carotenoid composition changing from lycopene to β-carotene and lutein. Thus, 532 nm-excited Raman spectra are useful for the qualitative analysis of carotenoids. For the 785 nm- and 1064 nm-excited Raman spectra, the peak intensity of the carotenoid showed good correlation with carotenoid concentration; thus, regression models for carotenoid concentration were developed using these Raman spectra and partial least squares regression. A regression model designed using the 785 nm-excited Raman spectra showed a better result than the 532 nm- and 1064 nm-excited Raman spectra. Therefore, it can be concluded that 785 nm is the most suitable excitation wavelength for the quantitative analysis of carotenoid concentration in tomatoes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Yang, Bing; Ren, Lingling; Li, Luming; Tao, Xingfu; Shi, Yunhua; Zheng, Yudong
2013-11-07
Current and future applications of single-wall carbon nanotubes (SWCNTs) depend on the dispersion of the SWCNTs in aqueous solution and their quantitation. The concentration of SWCNTs is an important indicator to evaluate the dispersibility of the surfactant-dispersed SWCNTs suspension. Due to the complexity of the SWCNTs suspension, it is necessary to determine both the total concentration of the dispersed SWCNTs and the concentration of individually dispersed SWCNTs in aqueous suspensions, and these were evaluated through the absorbance and the resonance ratios of UV-Vis-NIR absorption spectra, respectively. However, there is no specific and reliable position assigned for either calculation of the absorbance or the resonance ratio of the UV-Vis-NIR absorption spectrum. In this paper, different ranges of wavelengths for these two parameters were studied. From this, we concluded that the wavelength range between 300 nm and 600 nm should be the most suitable for evaluation of the total concentration of dispersed SWCNTs in the suspension; also, wavelengths below 800 nm should be most suitable for evaluation of the concentration of individually dispersed SWCNTs in the suspension. Moreover, these wavelength ranges are verified by accurate dilution experiments.
Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.
Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling
2015-11-01
In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. For a given parent node on the visual tree, it contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly if it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.
Hirdes, John P; Poss, Jeff W; Curtin-Telegdi, Nancy
2008-01-01
Background: Home care plays a vital role in many health care systems, but there is evidence that appropriate targeting strategies must be used to allocate limited home care resources effectively. The aim of the present study was to develop and validate a methodology for prioritizing access to community and facility-based services for home care clients. Methods: Canadian and international data based on the Resident Assessment Instrument – Home Care (RAI-HC) were analyzed to identify predictors for nursing home placement, caregiver distress and for being rated as requiring alternative placement to improve outlook. Results: The Method for Assigning Priority Levels (MAPLe) algorithm was a strong predictor of all three outcomes in the derivation sample. The algorithm was validated with additional data from five other countries, three other provinces, and an Ontario sample obtained after the use of the RAI-HC was mandated. Conclusion: The MAPLe algorithm provides a psychometrically sound decision-support tool that may be used to inform choices related to allocation of home care resources and prioritization of clients needing community or facility-based services. PMID:18366782
Fuzzy Classification of Ocean Color Satellite Data for Bio-optical Algorithm Constituent Retrievals
NASA Technical Reports Server (NTRS)
Campbell, Janet W.
1998-01-01
The ocean has been traditionally viewed as a two-class system. Morel and Prieur (1977) classified ocean water according to the dominant absorbent particle suspended in the water column. Case 1 water is described as having a high concentration of phytoplankton (and detritus) relative to other particles. Conversely, case 2 water is described as having inorganic particles, such as suspended sediments, in high concentrations. Little work has gone into the problem of mixing bio-optical models for these different water types. An approach is put forth here to blend bio-optical algorithms based on a fuzzy classification scheme. This scheme involves two procedures. First, a clustering procedure identifies classes and builds class statistics from in-situ optical measurements. Next, a classification procedure assigns satellite pixels partial memberships to these classes based on their ocean color reflectance signature. These membership assignments can then be used as the basis for weighting retrievals from class-specific bio-optical algorithms. This technique is demonstrated with in-situ optical measurements and an image from the SeaWiFS ocean color satellite.
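A minimal sketch of the two-step scheme under stated assumptions: standard fuzzy c-means membership weights computed from invented class centers, followed by a membership-weighted blend of two hypothetical class-specific retrieval functions (the coefficients and reflectance values below are purely illustrative):

import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership of one observation x to each cluster center.

    Standard FCM formula: u_k = 1 / sum_j (d_k/d_j)^(2/(m-1)).
    """
    d = np.linalg.norm(centers - x, axis=1)
    d = np.maximum(d, 1e-12)                  # avoid division by zero
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)            # memberships sum to 1

# Hypothetical class-specific retrievals, e.g. chlorophyll from two
# bio-optical band-ratio algorithms tuned to case 1 and case 2 waters.
def retrieval_case1(rrs): return 10 ** (0.3 - 2.5 * rrs[0] / rrs[1])
def retrieval_case2(rrs): return 10 ** (0.1 - 1.8 * rrs[0] / rrs[1])

centers = np.array([[0.004, 0.010],           # class 1 reflectance signature
                    [0.012, 0.006]])          # class 2 reflectance signature
pixel = np.array([0.007, 0.009])              # observed reflectance pair

u = fcm_memberships(pixel, centers)
blended = u[0] * retrieval_case1(pixel) + u[1] * retrieval_case2(pixel)
print(u, blended)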
2015-01-01
False negative docking outcomes for highly symmetric molecules are a barrier to the accurate evaluation of docking programs, scoring functions, and protocols. This work describes an implementation of a symmetry-corrected root-mean-square deviation (RMSD) method into the program DOCK based on the Hungarian algorithm for solving the minimum assignment problem, which dynamically assigns atom correspondence in molecules with symmetry. The algorithm adds only a trivial amount of computation time to the RMSD calculations and is shown to increase the reported overall docking success rate by approximately 5% when tested over 1043 receptor–ligand systems. For some families of protein systems the results are even more dramatic, with success rate increases up to 16.7%. Several additional applications of the method are also presented including as a pairwise similarity metric to compare molecules during de novo design, as a scoring function to rank-order virtual screening results, and for the analysis of trajectories from molecular dynamics simulation. The new method, including source code, is available to registered users of DOCK6 (http://dock.compbio.ucsf.edu). PMID:24410429
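A hedged sketch of the idea using SciPy's Hungarian solver; unlike the DOCK implementation, which restricts correspondences to chemically equivalent atom types, this toy treats all atoms as interchangeable:

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def symmetry_corrected_rmsd(ref, pose):
    """Minimum RMSD over atom correspondences via the Hungarian algorithm.

    ref, pose: (N, 3) coordinate arrays of equivalent atoms.
    """
    cost = cdist(ref, pose) ** 2          # squared-distance cost matrix
    rows, cols = linear_sum_assignment(cost)
    return np.sqrt(cost[rows, cols].mean())

# A symmetric ring relabeling: the naive atom ordering reports a large
# RMSD, while the optimal assignment correctly recovers ~0.
ref = np.array([[1., 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]])
pose = ref[::-1]
naive = np.sqrt(((ref - pose) ** 2).sum(axis=1).mean())
print(naive, symmetry_corrected_rmsd(ref, pose))   # ~1.414 vs 0.0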
Wavelength-resolved emission spectroscopy of the alkoxy and alkylthio radicals in a supersonic jet
NASA Technical Reports Server (NTRS)
Misra, Prabhakar; Zhu, Xinming; Hsueh, Ching-Yu; Kamal, Mohammed M.
1993-01-01
Wavelength-resolved emission spectra of methoxy (CH3O) and methylthio (CH3S) radicals have been obtained in a supersonic jet environment with a resolution of 0.3 nm by dispersing the total laser-induced fluorescence with a 0.6 m monochromator. A detailed analysis of the single vibronic level dispersed fluorescence spectra yields the following vibrational frequencies for CH3O in the X(2)E state: nu(sub 1 double prime) = 2953/cm, nu(sub 2 double prime) = 1375/cm, nu(sub 3 double prime) = 1062/cm, nu(sub 4 double prime) = 2869/cm, nu(sub 5 double prime) = 1528/cm and nu(sub 6 double prime) = 688/cm. A similar analysis of the wavelength-resolved emission spectra of CH3S provides the following ground state vibrational frequencies: nu(sub 2 double prime) = 1329/cm, nu(sub 3 double prime) = 739/cm and nu(sub 6 double prime) = 601/cm. An experimental uncertainty of 20/cm is estimated for the assigned frequencies.
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
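The arithmetic behind such null comparisons can be illustrated with a binomial tail, assuming random predictions that cover an alarm fraction p of the space-time volume. The alarm fraction below is invented for illustration; the published 2.87% and 53% figures come from the authors' own randomized null model, not from this simple calculation:

from math import comb

def tail_prob(n, k, p):
    """P[X >= k] for X ~ Binomial(n, p): chance that random predictions
    declared with space-time alarm fraction p hit at least k of n events."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed alarm fraction of 0.4, purely illustrative.
print(tail_prob(10, 8, 0.4))   # ~0.012, i.e. ~1.2% of random trials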
Zhao, Chuan-Li; Hsu, Chou-Jung; Hsu, Hua-Feng
2014-01-01
This paper considers single machine scheduling and due date assignment with setup time. The setup time is proportional to the length of the already processed jobs; that is, the setup time is past-sequence-dependent (p-s-d). It is assumed that a job's processing time depends on its position in a sequence. The objective functions include total earliness, the weighted number of tardy jobs, and the cost of due date assignment. We analyze these problems with two different due date assignment methods. We first consider the model with job-dependent position effects. For each case, by converting the problem to a series of assignment problems, we proved that the problems can be solved in O(n^4) time. For the model with job-independent position effects, we proved that the problems can be solved in O(n^3) time by providing a dynamic programming algorithm. PMID:25258727
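The reduction to an assignment problem can be sketched as follows: build a cost matrix whose (j, r) entry is the cost of placing job j in sequence position r, then solve it with a Hungarian-type solver. The cost expression below is a hypothetical stand-in for the paper's p-s-d setup and positional terms, not its actual model:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy single-machine instance: job j placed in position r contributes a
# cost c[j, r]. The position exponent and weighting are invented.
p = np.array([4.0, 2.0, 7.0, 3.0])        # basic processing times
a = 0.2                                   # hypothetical position exponent
n = len(p)
positions = np.arange(1, n + 1)

# e.g. positional processing time weighted by how many jobs queue behind
# position r (a classic completion-time style weighting)
c = p[:, None] * positions[None, :] ** a * (n - positions[None, :] + 1)

jobs, slots = linear_sum_assignment(c)
order = jobs[np.argsort(slots)]           # job sequence, first to last
print(order, c[jobs, slots].sum())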
Bezinge, Leonard; Maceiczyk, Richard M; Lignos, Ioannis; Kovalenko, Maksym V; deMello, Andrew J
2018-06-06
Recent advances in the development of hybrid organic-inorganic lead halide perovskite (LHP) nanocrystals (NCs) have demonstrated their versatility and potential application in photovoltaics and as light sources through compositional tuning of optical properties. That said, due to their compositional complexity, the targeted synthesis of mixed-cation and/or mixed-halide LHP NCs still represents an immense challenge for traditional batch-scale chemistry. To address this limitation, we herein report the integration of a high-throughput segmented-flow microfluidic reactor and a self-optimizing algorithm for the synthesis of NCs with defined emission properties. The algorithm, named Multiparametric Automated Regression Kriging Interpolation and Adaptive Sampling (MARIA), iteratively computes optimal sampling points at each stage of an experimental sequence to reach a target emission peak wavelength based on spectroscopic measurements. We demonstrate the efficacy of the method through the synthesis of multinary LHP NCs, (Cs/FA)Pb(I/Br)3 (FA = formamidinium) and (Rb/Cs/FA)Pb(I/Br)3 NCs, using MARIA to rapidly identify reagent concentrations that yield user-defined photoluminescence peak wavelengths in the green-red spectral region. The procedure returns a robust model around a target output in far fewer measurements than systematic screening of parametric space and additionally enables the prediction of other spectral properties, such as full-width at half-maximum and intensity, for conditions yielding NCs with a similar emission peak wavelength.
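A rough sketch of such a surrogate-driven adaptive sampling loop, using an off-the-shelf Gaussian-process (kriging) regressor rather than the authors' MARIA implementation; the single-parameter "experiment", target wavelength, and acquisition rule are all invented for illustration:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical stand-in for the microfluidic experiment: emission peak
# wavelength (nm) as a noisy function of one normalized halide ratio.
def run_experiment(x):
    return 520.0 + 160.0 * x + rng.normal(0.0, 1.0)

target = 620.0                              # desired peak wavelength, nm
X = list(np.linspace(0.0, 1.0, 4))          # initial design points
y = [run_experiment(x) for x in X]
grid = np.linspace(0.0, 1.0, 201)

for _ in range(6):
    gp = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(1.0)).fit(
        np.array(X)[:, None], y)
    mu, sd = gp.predict(grid[:, None], return_std=True)
    # sample where the model expects to hit the target, discounted by
    # uncertainty so unexplored regions are still visited
    score = np.abs(mu - target) - sd
    x_next = grid[np.argmin(score)]
    X.append(x_next)
    y.append(run_experiment(x_next))

print(x_next, y[-1])   # converges near x ~ 0.625 for this toy response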
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Airline fleet assignment involves the allocation of aircraft to a set of flight legs in order to meet passenger demand while satisfying a variety of constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of an agent-based integer optimization algorithm to a "cold start" fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (129 variables, 184 constraints).
Cloud classification from satellite data using a fuzzy sets algorithm: A polar example
NASA Technical Reports Server (NTRS)
Key, J. R.; Maslanik, J. A.; Barry, R. G.
1988-01-01
Where spatial boundaries between phenomena are diffuse, classification methods which construct mutually exclusive clusters seem inappropriate. The Fuzzy c-means (FCM) algorithm assigns each observation to all clusters, with membership values as a function of distance to the cluster center. The FCM algorithm is applied to AVHRR data for the purpose of classifying polar clouds and surfaces. Careful analysis of the fuzzy sets can provide information on which spectral channels are best suited to the classification of particular features, and can help determine likely areas of misclassification. General agreement in the resulting classes and cloud fraction was found between the FCM algorithm, a manual classification, and an unsupervised maximum likelihood classifier.
Packets Distributing Evolutionary Algorithm Based on PSO for Ad Hoc Network
NASA Astrophysics Data System (ADS)
Xu, Xiao-Feng
2018-03-01
Wireless communication networks have such features as limited bandwidth, changeable channels, and dynamic topology. Ad hoc networks therefore face many difficulties in access control, bandwidth distribution, resource assignment, and congestion control. To address this, a wireless packet-distributing evolutionary algorithm based on PSO (DPSO) for ad hoc networks is proposed. First, the impact of parameters on network performance is analyzed to obtain an effective network performance function. Second, the improved PSO evolutionary algorithm is used to solve the packet-distributing optimization problem from local to global. The simulation results show that the algorithm can ensure fairness and timeliness of network transmission, as well as improve the integrated resource utilization efficiency of the ad hoc network.
Salehpour, Mehdi; Behrad, Alireza
2017-10-01
This study proposes a new algorithm for nonrigid coregistration of synthetic aperture radar (SAR) and optical images. The proposed algorithm employs point features extracted by the binary robust invariant scalable keypoints algorithm and a new method called weighted bidirectional matching for initial correspondence. To refine false matches, we assume that the transformation between SAR and optical images is locally rigid. This property is used to refine false matches by assigning scores to matched pairs and clustering local rigid transformations using a two-layer Kohonen network. Finally, the thin plate spline algorithm and mutual information are used for nonrigid coregistration of SAR and optical images.
NASA Astrophysics Data System (ADS)
Omar, Saad; Omeragic, Dzevat
2018-04-01
The concept of apparent thicknesses is introduced for the inversion-based, multicasing evaluation interpretation workflow using multifrequency and multispacing electromagnetic measurements. A thickness value is assigned to each measurement, enabling the development of two new preprocessing algorithms to remove casing collar artifacts. First, long-spacing apparent thicknesses are used to remove, from the pipe sections, artifacts ("ghosts") caused by the transmitter crossing a casing collar or corrosion. Second, a collar identification, localization, and assignment algorithm is developed to enable robust inversion in collar sections. Last, casing eccentering can also be identified on the basis of opposite deviation of short-spacing phase and magnitude apparent thicknesses from the nominal value. The proposed workflow can handle an arbitrary number of nested casings and has been validated on synthetic and field data.
Improved ocean-color remote sensing in the Arctic using the POLYMER algorithm
NASA Astrophysics Data System (ADS)
Frouin, Robert; Deschamps, Pierre-Yves; Ramon, Didier; Steinmetz, François
2012-10-01
Atmospheric correction of ocean-color imagery in the Arctic brings some specific challenges that the standard atmospheric correction algorithm does not address, namely low solar elevation, high cloud frequency, multi-layered polar clouds, presence of ice in the field-of-view, and adjacency effects from highly reflecting surfaces covered by snow and ice and from clouds. The challenges may be addressed using a flexible atmospheric correction algorithm, referred to as POLYMER (Steinmetz et al., 2011). This algorithm does not use a specific aerosol model, but fits the atmospheric reflectance by a polynomial with a non-spectral term that accounts for any non-spectral scattering (clouds, coarse aerosol mode) or reflection (glitter, whitecaps, small ice surfaces within the instrument field of view), a spectral term with a law in wavelength to the power -1 (fine aerosol mode), and a spectral term with a law in wavelength to the power -4 (molecular scattering, adjacency effects from clouds and white surfaces). Tests are performed on selected MERIS imagery acquired over Arctic seas. The derived ocean properties, i.e., marine reflectance and chlorophyll concentration, are compared with those obtained with the standard MEGS algorithm. The POLYMER estimates are more realistic in regions affected by the ice environment, e.g., chlorophyll concentration is higher near the ice edge, and spatial coverage is substantially increased. Good retrievals are obtained in the presence of thin clouds, with ocean-color features exhibiting spatial continuity from clear to cloudy regions. The POLYMER estimates of marine reflectance agree better with in situ measurements than the MEGS estimates. Biases are 0.001 or less in magnitude, except at 412 and 443 nm, where they reach 0.005 and 0.002, respectively, and the root-mean-squared difference decreases from 0.006 at 412 nm to less than 0.001 at 620 and 665 nm. A first application to MODIS imagery is presented, revealing that the POLYMER algorithm is robust when pixels are contaminated by sea ice.
Sun, Jun; Zhou, Xin; Wu, Xiaohong; Zhang, Xiaodong; Li, Qinglin
2016-02-26
Fast identification of the moisture content in tobacco plant leaves plays a key role in the tobacco cultivation industry and benefits the management of tobacco plants on the farm. In order to identify the moisture content of tobacco plant leaves in a fast and nondestructive way, a method involving Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) was proposed to eliminate outlier samples in this study. The hyperspectral data of 200 tobacco plant leaf samples at 20 moisture gradients were obtained using a FieldSpec® 3 spectrometer. Savitzky-Golay smoothing (SG), roughness penalty smoothing (RPS), kernel smoothing (KS), and median smoothing (MS) were used to preprocess the raw spectra. In addition, Mahalanobis distance (MD), Monte Carlo cross validation (MCCV), and Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) were applied to select the outlier samples of the raw spectrum and the four smoothed spectra. The successive projections algorithm (SPA) was used to extract the most influential wavelengths. Multiple linear regression (MLR) was applied to build prediction models based on the preprocessed spectral features at the characteristic wavelengths. The results showed that the four best prediction models were MD-MCCV-SG (Rp(2) = 0.8401 and RMSEP = 0.1355), MD-MCCV-RPS (Rp(2) = 0.8030 and RMSEP = 0.1274), MD-MCCV-KS (Rp(2) = 0.8117 and RMSEP = 0.1433), and MD-MCCV-MS (Rp(2) = 0.9132 and RMSEP = 0.1162). The MD-MCCV algorithm performed best among the MD algorithm, the MCCV algorithm, and the method without sample pretreatment in eliminating outlier samples from the 20 moisture gradients of tobacco plant leaves, and MD-MCCV can be used to eliminate outlier samples in spectral preprocessing. Copyright © 2016 Elsevier Inc. All rights reserved.
Bio-geo-optical data collected in the Neuse River Estuary, North Carolina, USA were used to develop a semi-empirical optical algorithm for assessing inherent optical properties associated with water quality components (WQCs). Three wavelengths (560, 665 and 709 nm) were explored ...
Transmit Designs for the MIMO Broadcast Channel With Statistical CSI
NASA Astrophysics Data System (ADS)
Wu, Yongpeng; Jin, Shi; Gao, Xiqi; McKay, Matthew R.; Xiao, Chengshan
2014-09-01
We investigate the multiple-input multiple-output broadcast channel with statistical channel state information available at the transmitter. The so-called linear assignment operation is employed, and necessary conditions are derived for the optimal transmit design under general fading conditions. Based on this, we introduce an iterative algorithm to maximize the linear assignment weighted sum-rate by applying a gradient descent method. To reduce complexity, we derive an upper bound on the linear assignment achievable rate of each receiver, from which a simplified closed-form expression for a near-optimal linear assignment matrix is derived. This reveals an interesting construction analogous to that of dirty-paper coding. In light of this, a low complexity transmission scheme is provided. Numerical examples illustrate the significant performance gains of the proposed low complexity scheme.
Frequency Assignments for HFDF Receivers in a Search and Rescue Network
1990-03-01
SAR problem where whether or not a signal is detected by RS or HFDF at the various stations is described by probabilities. Daskin assumes the... allows the problem to be formulated with a linear objective function (6:52-53). Daskin also developed a heuristic solution algorithm to solve this...
The role of charity care and primary care physician assignment on ED use in homeless patients.
Wang, Hao; Nejtek, Vicki A; Zieger, Dawn; Robinson, Richard D; Schrader, Chet D; Phariss, Chase; Ku, Jocelyn; Zenarosa, Nestor R
2015-08-01
Homeless patients are a vulnerable population with a higher incidence of using the emergency department (ED) for noncrisis care. Multiple charity programs target their outreach toward improving the health of homeless patients, but few data are available on the effectiveness of reducing ED recidivism. The aim of this study is to determine whether inappropriate ED use for nonemergency care may be reduced by providing charity insurance and assigning homeless patients to a primary care physician (PCP) in an outpatient clinic setting. A retrospective medical records review of homeless patients presenting to the ED and receiving treatment between July 2013 and June 2014 was completed. Appropriate vs inappropriate use of the ED was determined using the New York University ED Algorithm. The association between patients with charity care coverage, PCP assignment status, and appropriate vs inappropriate ED use was analyzed and compared. Following New York University ED Algorithm standards, 76% of all ED visits were deemed inappropriate with approximately 77% of homeless patients receiving charity care and 74% of patients with no insurance seeking noncrisis health care in the ED (P=.112). About 50% of inappropriate ED visits and 43.84% of appropriate ED visits occurred in patients with a PCP assignment (P=.019). Both charity care homeless patients and those without insurance coverage tend to use the ED for noncrisis care resulting in high rates of inappropriate ED use. Simply providing charity care and/or PCP assignment does not seem to sufficiently reduce inappropriate ED use in homeless patients. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms
NASA Astrophysics Data System (ADS)
Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.
1997-09-01
This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solutions for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and the orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
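Minimizing the maximum distance is a bottleneck assignment problem; one compact formulation is to binary-search the distance threshold at which a perfect robot-to-slot matching exists. The sketch below relies on SciPy's bipartite matching routine and is a generic formulation under invented coordinates, not the paper's solver:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching
from scipy.spatial.distance import cdist

def bottleneck_assignment(robots, slots):
    """Assign robots to grid slots minimizing the maximum travel distance,
    by binary-searching the threshold at which a perfect matching exists."""
    d = cdist(robots, slots)
    levels = np.unique(d)
    lo, hi = 0, len(levels) - 1
    while lo < hi:                      # smallest feasible threshold
        mid = (lo + hi) // 2
        feasible = csr_matrix(d <= levels[mid])
        match = maximum_bipartite_matching(feasible, perm_type='column')
        if (match >= 0).all():
            hi = mid
        else:
            lo = mid + 1
    feasible = csr_matrix(d <= levels[lo])
    match = maximum_bipartite_matching(feasible, perm_type='column')
    return match, levels[lo]            # slot per robot, max distance

rng = np.random.default_rng(1)
robots = rng.uniform(0, 10, size=(100, 2))          # random initial positions
gx, gy = np.meshgrid(np.arange(10), np.arange(10))  # 10x10 surveillance grid
slots = np.column_stack([gx.ravel(), gy.ravel()]) + 0.5
assignment, worst = bottleneck_assignment(robots, slots)
print(worst)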
An algorithm for the arithmetic classification of multilattices.
Indelicato, Giuliana
2013-01-01
A procedure for the construction and the classification of monoatomic multilattices in arbitrary dimension is developed. The algorithm allows one to determine the location of the points of all monoatomic multilattices with a given symmetry, or to determine whether two assigned multilattices are arithmetically equivalent. This approach is based on ideas from integral matrix theory, in particular the reduction to the Smith normal form, and can be coded to provide a classification software package.
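As a hedged illustration of the integral-matrix machinery involved, SymPy can compute the Smith normal form directly; the matrix below is a textbook example, not one drawn from the paper:

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Two integral matrices related by unimodular transformations share the
# same Smith normal form, which is the kind of invariant such a
# classification procedure reduces to.
M = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, -4, -16]])
print(smith_normal_form(M, domain=ZZ))   # diag(2, 6, 12) for this example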
A method of operation scheduling based on video transcoding for cluster equipment
NASA Astrophysics Data System (ADS)
Zhou, Haojie; Yan, Chun
2018-04-01
Real-time video transcoding clusters face massive growth in the number of video jobs together with wide diversity in resolution and bit rate. This paper analyzes the current mainstream task scheduling algorithms and the characteristics of clusters built from real-time video transcoding equipment, and, combining these characteristics, proposes a task delay scheduling algorithm. The algorithm enables the cluster to achieve better performance in generating the job queue and in dispatching the lower part of the job queue when operation instructions are received. Finally, a small real-time video transcoding cluster is constructed to analyze the computing capability, running time, resource occupancy, and other aspects of various algorithms in job scheduling. The experimental results show that, compared with traditional cluster task scheduling algorithms, the task delay scheduling algorithm is more flexible and efficient.
NASA Astrophysics Data System (ADS)
Bhutta, M. Raheel; Hong, Keum-Shik; Kim, Beop-Min; Hong, Melissa Jiyoun; Kim, Yun-Hee; Lee, Se-Ho
2014-02-01
Given that approximately 80% of blood is water, we develop a wireless functional near-infrared spectroscopy system that detects not only the concentration changes of oxy- and deoxy-hemoglobin (HbO and HbR) during mental activity but also that of water (H2O). Additionally, it implements a water-absorption correction algorithm that improves the HbO and HbR signal strengths during an arithmetic task. The system comprises a microcontroller, an optical probe, tri-wavelength light emitting diodes, photodiodes, a WiFi communication module, and a battery. System functionality was tested by means of arithmetic-task experiments performed by healthy male subjects.
Rain rate range profiling from a spaceborne radar
NASA Technical Reports Server (NTRS)
Meneghini, R.
1980-01-01
At certain frequencies and incidence angles the relative invariance of the surface scattering properties over land can be used to estimate the total attenuation and the integrated rain from a spaceborne attenuating-wavelength radar. The technique is generalized so that rain rate profiles along the radar beam can be estimated, i.e., rain rate determination at each range bin. This is done by modifying the standard algorithm for an attenuating-wavelength radar to include in it the measurement of the total attenuation. Simple error analyses of the estimates show that this type of profiling is possible if the total attenuation can be measured with a modest degree of accuracy.
Evaluation of Long-term Aerosol Data Records from SeaWiFS over Land and Ocean
NASA Astrophysics Data System (ADS)
Bettenhausen, C.; Hsu, C.; Jeong, M.; Huang, J.
2010-12-01
Deserts around the globe produce mineral dust aerosols that may then be transported over cities, across continents, or even oceans. These aerosols affect the Earth’s energy balance through direct and indirect interactions with incoming solar radiation. They also have a biogeochemical effect as they deliver scarce nutrients to remote ecosystems. Large dust storms regularly disrupt air traffic and are a general nuisance to those living in transport regions. In the past, measuring dust aerosols has been incomplete at best. Satellite retrieval algorithms were limited to oceans or vegetated surfaces and typically neglected desert regions due to their high surface reflectivity in the mid-visible and near-infrared wavelengths, which have been typically used for aerosol retrievals. The Deep Blue aerosol retrieval algorithm was developed to resolve these shortcomings by utilizing the blue channels from instruments such as the Sea-Viewing Wide-Field-of-View Sensor (SeaWiFS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) to infer aerosol properties over these highly reflective surfaces. The surface reflectivity of desert regions is much lower in the blue channels and thus it is easier to separate the aerosol and surface signals than at the longer wavelengths used in other algorithms. More recently, the Deep Blue algorithm has been expanded to retrieve over vegetated surfaces and oceans as well. A single algorithm can now follow dust from source to sink. In this work, we introduce the SeaWiFS instrument and the Deep Blue aerosol retrieval algorithm. We have produced global aerosol data records over land and ocean from 1997 through 2009 using the Deep Blue algorithm and SeaWiFS data. We describe these data records and validate them with data from the Aerosol Robotic Network (AERONET). We also show the relative performance compared to the current MODIS Deep Blue operational aerosol data in desert regions. The current results are encouraging and this dataset will be useful to future studies in understanding the effects of dust aerosols on global processes, long-term aerosol trends, quantifying dust emissions, transport, and inter-annual variability.
Multi-Wavelength Optical Pyrometry Investigation for Turbine Engine Applications.
NASA Astrophysics Data System (ADS)
Estevadeordal, Jordi; Nirmalan, Nirm; Wang, Guanghua; Thermal Systems Team
2011-11-01
An investigation of optical pyrometry using multiple wavelengths and its application to turbine engines is presented. Current turbine engine pyrometers are typically broadband Si-detector line-of-sight (LOS) systems. They identify hot spots and spall areas in blades and bucket passages by detecting bursts of higher voltage signals. However, the single-color signal can be misleading for estimating temperature and emissivity variations in these bursts. Results for the radiant temperature, multi-color temperature and apparent emissivity are presented for turbine engine applications. For example, the results indicate that spall regions can be characterized using multi-wavelength techniques by showing that the temperature typically drops and the emissivity increases, which differentiates them from normal regions. Burst signals are analyzed with multicolor algorithms, and changes in the LOS hot-gas-path properties and in the suction side, trailing edge, pressure side, fillet and platform surfaces are characterized.
The analysis of GEOS-3 altimeter data in the Tasman and Coral seas
NASA Technical Reports Server (NTRS)
Mather, R. S.
1977-01-01
A technique was developed for preprocessing GEOS-3 altimetry data to establish a model of the regional sea surface. The algorithms produced models for a 35,000,000 sq km area with an internal precision of + or - 1 m. There were discrepancies between the sea surface model so obtained and GEM6-based geoid profiles, with wavelengths of approximately 2500 km and amplitudes of up to 5 m in this region. The amplitudes were smaller when compared with GEM10-based geoid determinations. However, the comparison of 14 pairs of overlapping passes in the region indicated altimeter resolution at the + or - 25 cm level if the wavelength corresponding to the Nyquist frequency were 30 km. The spectral analysis of such comparisons indicated the existence of significant signal strength in the discrepancies after least squares fitting, with wavelengths in excess of 200 km.
Design of polarization insensitive filters with micro- and nano-grating structures
NASA Astrophysics Data System (ADS)
Wang, Wen-liang; Rong, Xiao-hong
2014-03-01
For isotropic dielectric thin films, the polarization effect is an inherent characteristic. Because it degrades the performance of optical-electric systems, such polarization-dependent properties are often intolerable and should be eliminated in many applications. In this paper, based on a micro- and nano-optical structure whose period consists of four parts, a polarization insensitive filter is obtained by combining rigorous wave theory and a multi-objective immune optimization algorithm. Its working wavelength is 1315 nm, which is often used in laser systems. The results of our design show that TE and TM polarized waves have reflectivities of 0.482 and 0.485, respectively, at the design wavelength of 1315 nm. Both values are close to the design targets, their difference is only 0.003, and the polarization deviation is also very small. Therefore, the designed filter can eliminate the effect of polarization deviation very well at the 1315 nm wavelength.
Joint de-blurring and nonuniformity correction method for infrared microscopy imaging
NASA Astrophysics Data System (ADS)
Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban
2018-05-01
In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus presented due to the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, which were captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root mean-square error and the roughness-laplacian pattern index, which was specifically developed for the present work.
Schmidt, M; Werther, B; Fuerstenau, N; Matthias, M; Melz, T
2001-04-09
A fiber-optic extrinsic Fabry-Perot interferometer strain sensor (EFPI-S) with a sensor length of l(s) = 2.5 cm using three-wavelength digital phase demodulation is demonstrated to exhibit <50 pm displacement resolution (<2 nm/m strain resolution) when measuring the cross expansion of a PZT-ceramic plate. The sensing (single-mode downlead) and reflecting fibers are fused into a 150/360 micrometer capillary fiber, where the fusion points define the sensor length. Readout is performed using an improved version of the previously described three-wavelength digital phase demodulation method employing an arctan phase-stepping algorithm. In the recent experiments the strain sensitivity was varied via the mapping of the arctan lookup table to the 16-bit DA-converter range, from 188.25 k/V (6 V range: 1130 k) to 11.7 k/V (range: 70 k).
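As a hedged illustration of arctan phase recovery, here is the standard three-step algorithm for intensities sampled at phase offsets of 0, 120 and 240 degrees; the paper's three-wavelength scheme differs in how the three samples are obtained, so this is a generic sketch rather than its demodulation chain:

import numpy as np

def three_step_phase(i1, i2, i3):
    """Arctan phase recovery from three interferometer intensities taken
    at phase offsets of 0, 120 and 240 degrees:
        I_k = A + B*cos(phi + 2*pi*k/3)  =>
        phi = atan2(sqrt(3)*(I3 - I2), 2*I1 - I2 - I3)
    """
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# quick self-check against a synthetic fringe
phi_true = 1.2345
A, B = 1.0, 0.5
samples = [A + B * np.cos(phi_true + 2 * np.pi * k / 3) for k in range(3)]
print(three_step_phase(*samples))   # recovers 1.2345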
Particle swarm optimization algorithm for optimizing assignment of blood in blood banking system.
Olusanya, Micheal O; Arasomwan, Martins A; Adewumi, Aderemi O
2015-01-01
This paper reports the performance of particle swarm optimization (PSO) for the assignment of blood to meet patients' blood transfusion requests. While the drive for blood donation lingers, there is a need for effective and efficient management of available blood in blood banking systems. Moreover, the inherent danger of transfusing wrong blood types to patients, unnecessary importation of blood units from external sources, and wastage of blood products due to nonusage necessitate the development of mathematical models and techniques for effective handling of blood distribution among available blood types in order to minimize wastage and importation from external sources. This gives rise to the blood assignment problem (BAP) introduced recently in the literature. We propose queue and multiple knapsack models with a PSO-based solution to address this challenge. Simulation is based on sets of randomly generated data that mimic the real-world population distribution of blood types. Results obtained show the efficiency of the proposed algorithm for BAP, with no blood units wasted and very low importation, where necessary, from outside the blood bank. The results can therefore serve as a benchmark and basis for decision support tools for real-life deployment.
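For reference, a bare-bones global-best PSO in the usual velocity-update form; the sphere objective is a toy stand-in, since the paper's fitness would instead score a concrete blood-to-request assignment for wastage and importation:

import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # positions
    v = np.zeros((n, dim))                    # velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()           # global best
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia / cognitive / social
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

sphere = lambda z: float((z ** 2).sum())      # toy objective
print(pso(sphere, dim=5))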
Testing the accuracy of redshift-space group-finding algorithms
NASA Astrophysics Data System (ADS)
Frederic, James J.
1995-04-01
Using simulated redshift surveys generated from a high-resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find 'fingers of god,' while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm as well as the ideal algorithm parameters depends on the purpose for which groups are to be studied. The Huchra & Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius & White algorithm misses high velocity dispersion groups and members but is less likely to include interlopers in its group assignments. Adjusting the parameters of either algorithm results in a trade-off between group accuracy and completeness. In a companion paper we investigate the accuracy of virial mass estimates and clustering properties of groups identified using these algorithms.
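A toy friends-of-friends grouper with separate transverse and line-of-sight linking lengths, in the spirit of both algorithms discussed above; the union-find pass below is a generic O(N^2) sketch with fixed (rather than variable) linking lengths, and the catalog is randomly generated:

import numpy as np

def fof_groups(transverse, radial, d_perp, d_los):
    """Friends-of-friends with separate transverse and line-of-sight
    linking lengths, via union-find. transverse: (N, 2) projected
    positions; radial: (N,) redshift-space distances."""
    n = len(radial)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if (np.linalg.norm(transverse[i] - transverse[j]) < d_perp
                    and abs(radial[i] - radial[j]) < d_los):
                parent[find(i)] = find(j)     # link friends
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

rng = np.random.default_rng(2)
pos = rng.uniform(0, 50, (200, 2))
dist = rng.uniform(0, 500, 200)
# a generous line-of-sight length (cf. Huchra & Geller) picks up fingers
# of god; a small one (cf. Nolthenius & White) suppresses interlopers
print(len(fof_groups(pos, dist, d_perp=2.0, d_los=15.0)))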
An Overview of Major Terrestrial, Celestial, and Temporal Coordinate Systems for Target Tracking
2016-08-10
interp and Subroutines), http://hpiers.obspm.fr/eop-pc/index.php?index=models
General Software for Astronomy and Time Conversions: The IAU's Standards of Fundamental Astronomy Software [146], http://www.iausofa.org
Software for Optimal 2D Assignment: an overview of 2D assignment algorithms; the...
Figure caption fragment: ...Astronomy (SOFA) library were used to change the epoch of the data; the points in red are at the epoch of the Hipparcos catalog (1994.25 TT), and 20...
Spatial studies of planetary nebulae with IRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawkins, G.W.; Zuckerman, B.
1991-06-01
The infrared sizes at the four IRAS wavelengths of 57 planetaries, most with 20-60 arcsec optical size, are derived from spatial deconvolution of one-dimensional survey mode scans. Survey observations from multiple detectors and hours-confirmed (HCON) observations are combined to increase the sampling to a rate that is sufficient for successful deconvolution. The Richardson-Lucy deconvolution algorithm is used to obtain an increase in resolution of a factor of about 2 or 3 from the normal IRAS detector sizes of 45, 45, 90, and 180 arcsec at wavelengths 12, 25, 60, and 100 microns. Most of the planetaries deconvolve at 12 and 25 microns to sizes equal to or smaller than the optical size. Some of the planetaries with optical rings 60 arcsec or more in diameter show double-peaked IRAS profiles. Many, such as NGC 6720 and NGC 6543, show all infrared sizes equal to the optical size, while others indicate increasing infrared size with wavelength. Deconvolved IRAS profiles are presented for the 57 planetaries at nearly all wavelengths where IRAS flux densities are 1-2 Jy or higher. 60 refs.
Wavelength-switched phase interrogator for extrinsic Fabry-Perot interferometric sensors.
Xia, Ji; Xiong, Shuidong; Wang, Fuyin; Luo, Hong
2016-07-01
We report on phase interrogation of extrinsic Fabry-Perot interferometric (EFPI) sensors through a wavelength-switched unit with a polarization-maintaining fiber Bragg grating (PMFBG). The measurements at two wavelengths are first achieved in one total optical path. The PMFBG reflects peaks at two natural wavelengths detected in mutually perpendicular polarizations, and the wavelengths are switched through an electro-optic modulator at a high switching speed of 10 kHz. An ellipse fitting differential cross multiplication (EF-DCM) algorithm is proposed for interrogating the variation of the gap length of the EFPI sensors. The phase demodulation system has been demonstrated to recover a minimum phase of 0.42 μrad/Hz at the test frequency of 100 Hz with a stable intensity fluctuation level of ±0.8 dB. Three EFPI sensors with different cavity lengths were tested at the test frequency of 200 Hz, and the results indicate that the system can stably demodulate EFPI sensors with different cavity lengths.
NASA Astrophysics Data System (ADS)
Gao, Xingbo
2010-03-01
We introduce a new preemptive scheduling technique for next-generation optical burst switching (OBS) networks considering the impact of cascaded wavelength conversions. It has been shown that when optical bursts are transmitted all optically from source to destination, each wavelength conversion performed along the lightpath may cause certain signal-to-noise deterioration. If the distortion of the signal quality becomes significant enough, the receiver would not be able to recover the original data. Accordingly, subject to this practical impediment, we improve a recently proposed fair channel scheduling algorithm to deal with the fairness problem and aim at burst loss reduction simultaneously in OBS environments. In our scheme, the dynamic priority associated with each burst is based on a constraint threshold and the number of already conducted wavelength conversions among other factors for this burst. When contention occurs, a new arriving superior burst may preempt another scheduled one according to their priorities. Extensive simulation results have shown that the proposed scheme further improves fairness and achieves burst loss reduction as well.
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
Maximal multicast stream algorithms based on network coding (NC) can improve the throughput of wavelength-division multiplexing (WDM) networks, but the throughput they achieve is still far below the network's theoretical maximum. Moreover, the existing multicast stream algorithms do not determine the stream distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is put forward to maximize the optical multicast throughput by NC and to determine the multicast stream distribution by hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes, where the binary chromosomes represent the optical multicast routing and the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decodable multicast streams. The simulation results show that the proposed method is far superior to typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
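A loose sketch of a genetic algorithm over such a hybrid chromosome, with a binary part (e.g., link usage along candidate routes) and an integer part (e.g., streams per destination); the chromosome lengths, fitness function, and operator rates below are invented for illustration and are not the paper's encoding:

import numpy as np

rng = np.random.default_rng(3)
N_BIN, N_INT, INT_MAX, POP, GENS = 12, 4, 5, 40, 100

def fitness(b, z):
    # illustrative stand-in: reward streams delivered, penalize links used,
    # and penalize demanding more streams than the enabled links can carry
    return z.sum() - 0.3 * b.sum() - 2.0 * max(0, z.sum() - b.sum())

pop_b = rng.integers(0, 2, (POP, N_BIN))            # binary part
pop_z = rng.integers(0, INT_MAX + 1, (POP, N_INT))  # integer part
for _ in range(GENS):
    fit = np.array([fitness(b, z) for b, z in zip(pop_b, pop_z)])
    order = np.argsort(fit)[::-1]
    pop_b, pop_z = pop_b[order[:POP // 2]], pop_z[order[:POP // 2]]  # select
    # one-point crossover applied to each chromosome part independently
    pairs = rng.permutation(POP // 2)
    kids_b, kids_z = pop_b[pairs].copy(), pop_z[pairs].copy()
    cut_b, cut_z = rng.integers(1, N_BIN), rng.integers(1, N_INT)
    kids_b[:, cut_b:], kids_z[:, cut_z:] = pop_b[:, cut_b:], pop_z[:, cut_z:]
    # mutation: bit flips on the binary part, resampling on the integer part
    flip = rng.random(kids_b.shape) < 0.05
    kids_b[flip] ^= 1
    res = rng.random(kids_z.shape) < 0.05
    kids_z[res] = rng.integers(0, INT_MAX + 1, res.sum())
    pop_b = np.vstack([pop_b, kids_b])
    pop_z = np.vstack([pop_z, kids_z])

best = np.argmax([fitness(b, z) for b, z in zip(pop_b, pop_z)])
print(pop_b[best], pop_z[best])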
A Genetic Algorithm Approach to Door Assignment in Breakbulk Terminals
DOT National Transportation Integrated Search
2001-08-23
Commercial vehicle regulation and enforcement is a necessary and important function of state governments. Through regulation, states promote highway safety, ensure that motor carriers have the proper licenses and operating permits, and collect taxes ...
DESIGNING PROCESSES FOR ENVIRONMENTAL PROBLEMS
Designing for the environment requires consideration of environmental impacts. The Generalized WAR Algorithm is the methodology that allows the user to evaluate the potential environmental impact of the design of a chemical process. In this methodology, chemicals are assigned val...
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping, and spectrum assignment. It is even more complex and challenging when the physical elastic optical network uses multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation, and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping, and core allocation. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
Constant Communities in Complex Networks
NASA Astrophysics Data System (ADS)
Chakraborty, Tanmoy; Srinivasan, Sriram; Ganguly, Niloy; Bhowmick, Sanjukta; Mukherjee, Animesh
2013-05-01
Identifying community structure is a fundamental problem in network analysis. Most community detection algorithms are based on optimizing a combinatorial parameter, for example modularity. This optimization is generally NP-hard, and merely changing the vertex order can alter the assignment of vertices to communities. However, there has been little study of how vertex ordering influences the results of community detection algorithms. Here we identify and study the properties of invariant groups of vertices (constant communities) whose assignments to communities are, quite remarkably, not affected by vertex ordering. The percentage of constant communities can vary across different applications, and based on empirical results we propose metrics to evaluate these communities. Using constant communities as a pre-processing step, one can significantly reduce the variation of the results. Finally, we present a case study on a phoneme network and illustrate that constant communities, quite strikingly, form the core functional units of the larger communities.
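A rough way to probe for constant communities with off-the-shelf tools: run a randomized community detector several times and keep the vertex pairs that are always co-assigned. This uses NetworkX's Louvain implementation (networkx >= 2.8) with varying seeds as a stand-in for explicit vertex reordering, which is not quite the paper's procedure:

import itertools
import networkx as nx

G = nx.karate_club_graph()

def labels(seed):
    comms = nx.community.louvain_communities(G, seed=seed)
    return {v: i for i, c in enumerate(comms) for v in c}

runs = [labels(s) for s in range(20)]    # 20 randomized runs
# vertex pairs that land in the same community in every run
constant = [(u, v) for u, v in itertools.combinations(G, 2)
            if all(r[u] == r[v] for r in runs)]
print(len(constant), "always-co-assigned pairs out of",
      G.number_of_nodes() * (G.number_of_nodes() - 1) // 2)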
Performance tradeoffs in static and dynamic load balancing strategies
NASA Technical Reports Server (NTRS)
Iqbal, M. A.; Saltz, J. H.; Bokhari, S. H.
1986-01-01
The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
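As a flavor of the static strategies compared above, here is a simple greedy load balancer (largest task first onto the least-loaded processor); this is a generic heuristic in the same spirit, not the paper's greedy approximation scheme, which also exploits the program's structure:

import heapq

def greedy_assign(task_costs, n_procs):
    """Largest-processing-time-first greedy load balancing: always place
    the next-largest task on the currently least-loaded processor."""
    loads = [(0.0, p) for p in range(n_procs)]   # (load, processor) heap
    heapq.heapify(loads)
    assignment = {}
    for t, c in sorted(enumerate(task_costs), key=lambda tc: -tc[1]):
        load, p = heapq.heappop(loads)
        assignment[t] = p
        heapq.heappush(loads, (load + c, p))
    return assignment, max(l for l, _ in loads)

tasks = [7, 3, 9, 2, 4, 8, 5, 1]
print(greedy_assign(tasks, n_procs=3))   # greedy makespan 14; optimum is 13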
Random synaptic feedback weights support error backpropagation for deep learning
NASA Astrophysics Data System (ADS)
Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.
2016-11-01
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
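A minimal numpy sketch of the mechanism described (feedback alignment): a two-layer network learns a toy linear mapping while the backward pass multiplies errors by a fixed random matrix B instead of the transpose of the forward weights; the dimensions and learning rate are arbitrary choices, not the paper's settings:

import numpy as np

rng = np.random.default_rng(4)
n_in, n_hid, n_out = 10, 64, 5
M = rng.normal(size=(n_out, n_in))               # target linear map
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))           # fixed random feedback
lr = 0.01

for step in range(2000):
    x = rng.normal(size=(n_in, 32))              # minibatch of inputs
    y = M @ x
    h = np.tanh(W1 @ x)                          # forward pass
    y_hat = W2 @ h
    e = y_hat - y                                # output error
    dh = (B @ e) * (1 - h ** 2)                  # random feedback, not W2.T
    W2 -= lr * e @ h.T / x.shape[1]
    W1 -= lr * dh @ x.T / x.shape[1]
    if step % 500 == 0:
        print(step, float((e ** 2).mean()))      # error falls as W2 aligns with B.T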
Hierarchical planning for a surface mounting machine placement.
Zeng, You-jiao; Ma, Deng-ze; Jin, Ye; Yan, Jun-qi
2004-11-01
For a surface mounting machine (SMM) in a printed circuit board (PCB) assembly line, there are four problems: CAD data conversion, nozzle selection, feeder assignment, and placement sequence determination. A hierarchical planning approach for them to maximize the throughput rate of an SMM is presented here. To minimize set-up time, a CAD data conversion system was first applied that could automatically generate the data for machine placement from CAD design data files. Then an effective nozzle selection approach was implemented to minimize the time spent changing nozzles. Next, to minimize picking time, an algorithm for feeder assignment was used to allow picking multiple components simultaneously as much as possible. Finally, in order to shorten pick-and-place time, a heuristic algorithm was used to determine the optimal component placement sequence according to the decided feeder positions. Experiments were conducted on a four-head SMM. The experimental results were used to analyse the assembly line performance.
Yamazoe, Kenji; Mochi, Iacopo; Goldberg, Kenneth A.
2014-12-01
The wavefront retrieval by gradient descent algorithm that is typically applied to coherent or incoherent imaging is extended to retrieve a wavefront from a series of through-focus images by partially coherent illumination. For accurate retrieval, we modeled partial coherence as well as object transmittance into the gradient descent algorithm. However, this modeling increases the computation time due to the complexity of the partially coherent imaging simulation that is repeatedly used in the optimization loop. To accelerate the computation, we incorporate not only the Fourier transform but also an eigenfunction decomposition of the image. As a demonstration, the extended algorithm is applied to retrieve a field-dependent wavefront of a microscope operated at extreme ultraviolet wavelength (13.4 nm). The retrieved wavefront qualitatively matches the expected characteristics of the lens design.
Community Detection on the GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naim, Md; Manne, Fredrik; Halappanavar, Mahantesh
We present and evaluate a new GPU algorithm based on the Louvain method for community detection. Our algorithm is the first for this problem that parallelizes the access to individual edges. In this way we can fine tune the load balance when processing networks with nodes of highly varying degrees. This is achieved by scaling the number of threads assigned to each node according to its degree. Extensive experiments show that we obtain speedups up to a factor of 270 compared to the sequential algorithm. The algorithm consistently outperforms other recent shared memory implementations and is only one order of magnitude slower than the current fastest parallel Louvain method running on a Blue Gene/Q supercomputer using more than 500K threads.
Intraocular scattering compensation in retinal imaging
Christaras, Dimitrios; Ginis, Harilaos; Pennos, Alexandros; Artal, Pablo
2016-01-01
Intraocular scattering affects fundus imaging in a similar way as it affects vision: it causes a decrease in contrast that depends both on the intrinsic scattering of the eye and on the dynamic range of the image. Consequently, in cases where the absolute intensity in the fundus image is important, scattering can lead to erroneous estimates. In this paper, a setup capable of acquiring fundus images and objectively estimating intraocular scattering was built, and the acquired images were then used for scattering compensation in fundus imaging. The method consists of two parts: first, the individual's wide-angle point spread function (PSF) is reconstructed at a specific wavelength, and it is then used within an enhancement algorithm on an acquired fundus image to compensate for scattering. As a proof of concept, a single-pass measurement with a scatter filter was carried out first, and the complete algorithm of PSF reconstruction and scattering compensation was applied. The advantage of the single-pass test is that one can compare the reconstructed image with the original one, thus testing the efficiency of the method. Following the test, the algorithm was applied to actual fundus images of human eyes, and the contrast of the image before and after compensation was compared. The comparison showed that, depending on the wavelength, contrast can be reduced by 8.6% under certain conditions. PMID:27867710
Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk
2018-04-20
Threats of detection by infrared (IR) signals are greater than for other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing the IR signal has been conducted to control the IR signal by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method includes the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature, which results in a synchronization between the IR signals from the object and the surrounding background by setting the inverse distance weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse distance weighted active IR stealth technique proposed in this study is an effective method for reducing the contrast radiant intensity between the object and background, by up to 32% compared to the previous method using the CRI determined as the simple signal difference between the object and the background.
A priori mesh grading for the numerical calculation of the head-related transfer functions
Ziegelwanger, Harald; Kreuzer, Wolfgang; Majdak, Piotr
2017-01-01
Head-related transfer functions (HRTFs) describe the directional filtering of the incoming sound caused by the morphology of a listener’s head and pinnae. When an accurate model of a listener’s morphology exists, HRTFs can be calculated numerically with the boundary element method (BEM). However, the general recommendation to model the head and pinnae with at least six elements per wavelength renders the BEM a time-consuming procedure when calculating HRTFs for the full audible frequency range. In this study, a mesh preprocessing algorithm is proposed, viz., a priori mesh grading, which reduces the computational costs of the HRTF calculation process significantly. The mesh grading algorithm deliberately violates the recommendation of at least six elements per wavelength in certain regions of the head and pinnae and varies the size of elements gradually according to an a priori defined grading function. The evaluation of the algorithm involved HRTFs calculated for various geometric objects, including meshes of three human listeners, and various grading functions. The numerical accuracy and the predicted sound-localization performance of calculated HRTFs were analyzed. A priori mesh grading appeared to be suitable for the numerical calculation of HRTFs in the full audible frequency range and outperformed uniform meshes in terms of numerical errors, perception-based predictions of sound-localization performance, and computational costs. PMID:28239186
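The grading idea above can be made concrete with a toy grading function; the linear growth law and all parameter values below are assumptions for illustration, not the paper's actual function:

```python
import numpy as np

def graded_edge_length(distance_mm, l_min=1.0, growth=0.15, l_max=12.0):
    """A priori grading: the target edge length grows with distance from the
    acoustically critical region (e.g. the pinna), capped at l_max.
    All parameter values here are illustrative, not the paper's."""
    return np.minimum(l_max, l_min * (1.0 + growth * distance_mm))

distances = np.array([0.0, 10.0, 50.0, 200.0])   # mm from the pinna
print(graded_edge_length(distances))              # [ 1.   2.5  8.5 12. ]
```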
NASA Astrophysics Data System (ADS)
Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo
2016-10-01
Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are very computationally expensive due to the disparity of the relevant scales, ranging from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten-centimeter range. To bridge the gap between these disparate scales, the ponderomotive guiding center (PGC) algorithm is a promising approach. By describing the evolution of the laser pulse envelope separately, only the scales larger than the plasma wavelength need to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of the 3D version of a PGC solver into the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed and shared memory parallelization. We also discuss the stability of the PGC solver.
Remote Sensing of Suspended Sediments and Shallow Coastal Waters
NASA Technical Reports Server (NTRS)
Li, Rong-Rong; Kaufman, Yoram J.; Gao, Bo-Cai; Davis, Curtiss O.
2002-01-01
Ocean color sensors were designed mainly for remote sensing of chlorophyll concentrations over clear, open oceanic areas (case 1 water) using channels between 0.4 and 0.86 micrometers. The Moderate Resolution Imaging Spectroradiometer (MODIS) launched on the NASA Terra and Aqua spacecrafts is equipped with narrow channels located within a wider wavelength range between 0.4 and 2.5 micrometers for a variety of remote sensing applications. The wide spectral range can provide improved capabilities for remote sensing of the more complex and turbid coastal waters (case 2 water) and for improved atmospheric corrections over ocean scenes. In this article, we describe an empirical algorithm that uses this wide spectral range to identify areas with suspended sediments in turbid waters and shallow waters with bottom reflections. The algorithm takes advantage of the strong water absorption at wavelengths longer than 1 micrometer, which prevents light at these wavelengths from reaching suspended sediments in the water column or a shallow ocean floor. MODIS data acquired over the east coast of China, the west coast of Africa, the Arabian Sea, the Mississippi Delta, and the west coast of Florida are used in this study.
Iris biometric system design using multispectral imaging
NASA Astrophysics Data System (ADS)
Widhianto, Benedictus Yohanes Bagus Y. B.; Nasution, Aulia M. T.
2016-11-01
An identity recognition system is a vital component of modern life, and iris biometrics is one of the modalities with the best accuracy, reaching 99%. Usually, iris biometric systems use infrared illumination to reduce the discomfort caused when the eye is directly illuminated, whereas the eumelanin that forms the iris fluoresces most strongly under visible light. In this research, the iris is imaged at wavelengths of 850 nm, 560 nm, and 590 nm; detection uses the Daugman algorithm with Gabor wavelet feature extraction, and matching uses the Hamming distance. The results are analyzed to quantify the differences between wavelengths, to improve the accuracy of the multispectral biometric system, and to detect the authenticity of the iris. The analyses at 850 nm, 560 nm, and 590 nm yield accuracies of 99.35%, 97.5%, and 64.5%, with matching scores of 0.26, 0.23, and 0.37, respectively.
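The matching step named above, a Daugman-style fractional Hamming distance between binary iris codes, can be illustrated with a minimal NumPy sketch; the occlusion masks and the ~0.32 decision threshold in the comment are typical assumptions of iris pipelines, not details from the abstract:

```python
import numpy as np

def iris_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over bits valid in both codes."""
    valid = mask_a & mask_b                 # bits usable in both images
    disagree = (code_a ^ code_b) & valid    # XOR counts differing bits
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048, dtype=np.uint8)
b = a.copy()
b[:200] ^= 1                                # flip ~10% of bits
m = np.ones(2048, dtype=np.uint8)
print(iris_hamming(a, b, m, m))             # ~0.098, under a typical ~0.32 threshold
```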
Total ozone column retrieval from UV-MFRSR irradiance measurements: evaluation at Mauna Loa station
NASA Astrophysics Data System (ADS)
Zempila, Melina Maria; Fragkos, Konstantinos; Davis, John; Sun, Zhibin; Chen, Maosi; Gao, Wei
2017-09-01
The USDA UV-B Monitoring and Research Program (UVMRP) comprises 36 climatological sites along with 4 long-duration research sites, in 27 states, one Canadian province, and the South Island of New Zealand. Each station is equipped with an ultraviolet multi-filter rotating shadowband radiometer (UV-MFRSR) which provides response-weighted irradiances at 7 wavelengths (300, 305.5, 311.4, 317.6, 325.4, and 368 nm) with a nominal full width at half maximum of 2 nm. These UV irradiance data from the long-term monitoring station at Mauna Loa, Hawaii, are used as input to a retrieval algorithm in order to derive high-time-frequency total ozone columns. The sensitivity of the algorithm to the different wavelength inputs is tested and the uncertainty of the retrievals is assessed based on error propagation methods. For the validation of the method, collocated hourly ozone data from the Dobson Network of the Global Monitoring Division (GMD) of the Earth System Radiation Laboratory (ESRL), under the jurisdiction of the US National Oceanic & Atmospheric Administration (NOAA), for the period 2010-2015 were used.
Plural-wavelength flame detector that discriminates between direct and reflected radiation
NASA Technical Reports Server (NTRS)
Hall, Gregory H. (Inventor); Barnes, Heidi L. (Inventor); Medelius, Pedro J. (Inventor); Simpson, Howard J. (Inventor); Smith, Harvey S. (Inventor)
1997-01-01
A flame detector employs a plurality of wavelength-selective radiation detectors and a digital signal processor programmed to analyze each of the detector signals and determine whether radiation is received directly from a small flame source that warrants generation of an alarm. The processor's algorithm employs a normalized cross-correlation analysis of the detector signals to discriminate between radiation received directly from a flame and radiation received from a reflection of a flame, to ensure that reflections will not trigger an alarm. In addition, the algorithm employs a Fast Fourier Transform (FFT) frequency spectrum analysis of one of the detector signals to discriminate between flames of different sizes. In a specific application, the detector incorporates two infrared (IR) detectors and one ultraviolet (UV) detector for discriminating between a directly sensed small hydrogen flame and reflections from a large hydrogen flame. The signals generated by each of the detectors are sampled and digitized for analysis by the digital signal processor, preferably 250 times a second. A sliding time window of approximately 30 seconds of detector data is created using FIFO memories.
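A minimal sketch of the normalized cross-correlation test described above; the 250 Hz sampling rate and ~30 s window follow the abstract, while the decision threshold and the direction of the comparison are illustrative placeholders:

```python
import numpy as np

FS = 250            # samples per second, as stated in the abstract
WINDOW = 30 * FS    # ~30 s sliding window of detector data

def norm_xcorr(x, y):
    """Zero-lag normalized cross-correlation of two detector signals."""
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# How the correlation level separates direct from reflected radiation is the
# detector's programmed decision logic; this threshold is purely illustrative.
THRESHOLD = 0.8

def warrants_alarm(ir_signal, uv_signal):
    return norm_xcorr(ir_signal[-WINDOW:], uv_signal[-WINDOW:]) > THRESHOLD
```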
Insausti, Matías; Gomes, Adriano A; Cruz, Fernanda V; Pistonesi, Marcelo F; Araujo, Mario C U; Galvão, Roberto K H; Pereira, Claudete F; Band, Beatriz S F
2012-08-15
This paper investigates the use of UV-vis, near infrared (NIR) and synchronous fluorescence (SF) spectrometries coupled with multivariate classification methods to discriminate biodiesel samples with respect to the base oil employed in their production. More specifically, the present work extends previous studies by investigating the discrimination of corn-based biodiesel from two other biodiesel types (sunflower and soybean). Two classification methods are compared, namely full-spectrum SIMCA (soft independent modelling of class analogies) and SPA-LDA (linear discriminant analysis with variables selected by the successive projections algorithm). Regardless of the spectrometric technique employed, full-spectrum SIMCA did not provide an appropriate discrimination of the three biodiesel types. In contrast, all samples were correctly classified on the basis of a reduced number of wavelengths selected by SPA-LDA. It can be concluded that UV-vis, NIR and SF spectrometries can be successfully employed to discriminate corn-based biodiesel from the two other biodiesel types, but wavelength selection by SPA-LDA is key to the proper separation of the classes. Copyright © 2012 Elsevier B.V. All rights reserved.
Emission line spectra of S VII–S XIV in the 20–75 Å wavelength region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lepson, J K; Beiersdorfer, P; Behar, E
As part of a larger project to complete a comprehensive catalogue of astrophysically relevant emission lines in support of new-generation X-ray observatories, the authors present observations, obtained using the Lawrence Livermore electron beam ion traps EBIT-I and EBIT-II, of sulfur lines in the soft X-ray and extreme ultraviolet regions. The database includes wavelength measurements with standard errors, relative intensities, and line assignments for 127 transitions of S VII through S XIV between 20 and 75 Å. The experimental data are complemented with a full set of calculations using the Hebrew University Lawrence Livermore Atomic Code (HULLAC). A comparison of the laboratory data with Chandra measurements of Procyon allows them to identify S VII–S XI lines.
Characterization of the Nimbus-7 SBUV radiometer for the long-term monitoring of stratospheric ozone
NASA Technical Reports Server (NTRS)
Cebula, Richard P.; Park, H.; Heath, D. F.
1988-01-01
Precise knowledge of in-orbit sensitivity change is critical for the successful monitoring of stratospheric ozone by satellite-based remote sensors. This paper evaluates those aspects of the in-flight operation that influence the long-term stability of the upper stratospheric ozone measurements made by the Nimbus-7 SBUV spectroradiometer and chronicles methods used to maintain the long-term albedo calibration of this UV sensor. It is shown that the instrument's calibration for the ozone measurement, the albedo calibration, has been maintained over the first 6 yr of operation to an accuracy of approximately ±2 percent. The instrument's wavelength calibration is shown to drift linearly with time. The knowledge of the SBUV wavelength assignment is maintained to a 0.02-nm precision.
DNATCO: assignment of DNA conformers at dnatco.org.
Černý, Jiří; Božíková, Paulína; Schneider, Bohdan
2016-07-08
The web service DNATCO (dnatco.org) classifies local conformations of DNA molecules beyond their traditional sorting into A, B and Z DNA forms. DNATCO provides an interface to robust algorithms assigning conformation classes, called NtC, to dinucleotides extracted from DNA-containing structures uploaded in PDB format version 3.1 or above. The assigned dinucleotide NtC classes are further grouped into the DNA structural alphabet NtA, to the best of our knowledge the first DNA structural alphabet. The results are presented at two levels: in the form of a user-friendly visualization and analysis of the assignment, and in the form of a downloadable, more detailed table for further offline analysis. The website is free and open to all users and there is no login requirement. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization. Simulations show that the proposed PHMOBA works more efficiently than the original MOBA and the other existing algorithms in generating OGRs for optical WDM systems, with higher convergence and success rates. For OGRs up to 20 marks, the efficiency improvement of PHMOBA in terms of ruler length and total optical channel bandwidth (TBW) is 100%, compared with 85% for the original MOBA. Finally, implications for further research are discussed.
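For readers unfamiliar with the OGR objective above: a Golomb ruler is a set of integer marks in which every pairwise difference is distinct, and an OGR is the shortest such ruler for a given number of marks. A minimal feasibility checker follows (illustrative only, not the proposed Bat algorithm, which searches over candidate mark sets and scores them by ruler length and TBW):

```python
from itertools import combinations

def is_golomb(marks):
    """True if all pairwise differences between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb([0, 1, 4, 9, 11]))   # True: a known 5-mark ruler of length 11
print(is_golomb([0, 1, 2, 5]))       # False: the difference 1 repeats
```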
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
NASA Astrophysics Data System (ADS)
Bhattacharya, Bhaswati; Jana, Barnali; Bose, Debosreeta; Chattopadhyay, Nitin
2011-01-01
Multiple emissions have been observed from benzil under different conditions in solutions at room temperature as well as in low temperature glass matrices at 77 K. Low temperature emission has been monitored in rigid matrices frozen under different conditions of illumination. Steady state and time-resolved results together with the ab initio quantum chemical calculations provide, for the first time, the assignments of the different fluorescence bands to the different geometries and/or electronic states of the fluorophore molecule. It is revealed that the skew form of benzil emits from the first (S1) as well as the second excited singlet (S2) states depending on the excitation wavelength, while the relaxed transplanar conformer fluoresces only from the S1 state. The yet unexplored emission band peaking at around 360 nm has been assigned to originate from the S2 state. Ab initio calculations using the density functional theory at B3LYP/6-31G** level corroborate well with the experimental observations.
Towards Automated Structure-Based NMR Resonance Assignment
NASA Astrophysics Data System (ADS)
Jang, Richard; Gao, Xin; Li, Ming
We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies consisting of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only N-labeled NMR data and avoid the added expense of C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, five-fold on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy obtained without correcting for typing errors.
Wang, Lu; Pu, Hongbin; Sun, Da-Wen
2016-01-15
Chlorophyll a (Chl-a) is regarded as one of the important components for estimating water quality and the sustainability of freshwater aquaculture operations. In the current study, a hyperspectral imaging (HSI) system was used to determine the effect of season models on the accuracy of Chl-a estimation in outdoor aquaculture ponds. A visible and near-infrared hyperspectral imaging system (400-1000 nm) was used to measure the surface spectral reflectance (R) of water collected from outdoor ponds in four different seasons. First, the surface spectral reflectance values were amplified by a baseline correction (740 nm). Two-band, three-band and four-band spectral reflectance models were used to compute Chl-a concentration, and a new cross-band ratio algorithm with six wavelengths was proposed in the study. Results indicated that the two-band model based on the reflectance ratio R702/R666 performed better for Chl-a prediction, with a determination coefficient (r²) of 0.908, than the (R675^-1 - R691^-1)*R743 and (R675^-1 - R691^-1)/(R743^-1 - R691^-1) models, with r² of 0.902 and 0.896, respectively. Six optimal wavelengths (410, 682, 691, 966, 972, and 997 nm) were identified using the successive projections algorithm (SPA). The optimized regression model (R410^-1 - R966^-1)/(R682^-1 - R972^-1)/(R691^-1 - R997^-1) showed the best result, with r² of 0.961 for Chl-a prediction. The cross-band ratio model with six wavelengths was mapped to each pixel in the image to display the Chl-a component in outdoor ponds across the four seasons. The current study showed that it is feasible to use the HSI system for monitoring the seasonal influence on outdoor aquaculture water quality. Copyright © 2015 Elsevier B.V. All rights reserved.
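As a sketch of how a two-band ratio model such as the R702/R666 relation above might be calibrated against lab-measured Chl-a (the linear fit form and all numbers below are assumptions for illustration, not the paper's data):

```python
import numpy as np

# Hypothetical per-sample band ratios and lab-measured Chl-a concentrations.
ratio = np.array([1.05, 1.20, 1.42, 1.61, 1.88])   # R702 / R666 per sample
chla  = np.array([4.1, 9.8, 18.5, 26.0, 37.2])     # Chl-a in µg/L

slope, intercept = np.polyfit(ratio, chla, 1)       # simple linear calibration
pred = slope * ratio + intercept
ss_res = np.sum((chla - pred) ** 2)
ss_tot = np.sum((chla - chla.mean()) ** 2)
print("r^2 =", 1 - ss_res / ss_tot)                 # analogous to the reported 0.908
```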
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of applying other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection. Copyright © 2011 Elsevier B.V. All rights reserved.
Automated and assisted RNA resonance assignment using NMR chemical shift statistics
Aeschbacher, Thomas; Schmidt, Elena; Blatter, Markus; Maris, Christophe; Duss, Olivier; Allain, Frédéric H.-T.; Güntert, Peter; Schubert, Mario
2013-01-01
The three-dimensional structure determination of RNAs by NMR spectroscopy relies on chemical shift assignment, which still constitutes a bottleneck. In order to develop more efficient assignment strategies, we analysed relationships between sequence and 1H and 13C chemical shifts. Statistics of resonances from regularly Watson–Crick base-paired RNA revealed highly characteristic chemical shift clusters. We developed two approaches using these statistics for chemical shift assignment of double-stranded RNA (dsRNA): a manual approach that yields starting points for resonance assignment and simplifies decision trees and an automated approach based on the recently introduced automated resonance assignment algorithm FLYA. Both strategies require only unlabeled RNAs and three 2D spectra for assigning the H2/C2, H5/C5, H6/C6, H8/C8 and H1′/C1′ chemical shifts. The manual approach proved to be efficient and robust when applied to the experimental data of RNAs with a size between 20 nt and 42 nt. The more advanced automated assignment approach was successfully applied to four stem-loop RNAs and a 42 nt siRNA, assigning 92–100% of the resonances from dsRNA regions correctly. This is the first automated approach for chemical shift assignment of non-exchangeable protons of RNA and their corresponding 13C resonances, which provides an important step toward automated structure determination of RNAs. PMID:23921634
High-resolution Laboratory Measurements of Coronal Lines near the Fe IX Line at 171 Å
NASA Astrophysics Data System (ADS)
Beiersdorfer, Peter; Träbert, Elmar
2018-02-01
We present high-resolution laboratory measurements in the spectral region between 165 and 175 Å that focus on the emission from various ions of C, O, F, Ne, S, Ar, Fe, and Ni. This wavelength region is centered on the λ171 Fe IX channel of the Atmospheric Imaging Assembly on the Solar Dynamics Observatory, and we place special emphasis on the weaker emission lines of Fe IX predicted in this region. In general, our measurements show a multitude of weak lines missing in the current databases, where the emission lines of Ni are probably most in need of further identification and reclassification. We also find that the wavelengths of some of the known lines need updating. Using the multi-reference Møller–Plesset method for wavelength predictions and collisional-radiative modeling of the line intensities, we have made tentative assignments of more than a dozen lines to the spectrum of Fe IX, some of which have formerly been identified as Fe VII, Fe XIV, or Fe XVI lines. Several Fe features remain unassigned, although they appear to be either Fe VII or Fe X lines. Further work will be needed to complete and correct the spectral line lists in this wavelength region.
Tunable thin film filters for intelligent WDM networks
NASA Astrophysics Data System (ADS)
Cahill, Michael; Bartolini, Glenn; Lourie, Mark; Domash, Lawrence
2006-08-01
Optical transmission systems have evolved rapidly in recent years with the emergence of new technologies for gain management, wavelength multiplexing, tunability, and switching. WDM networks are increasingly expected to be agile, flexible, and reconfigurable which in turn has led to a need for monitoring to be more widely distributed within the network. Automation of many actions performed on these networks, such as channel provisioning and power balancing, can only be realized by the addition of optical channel monitors (OCMs). These devices provide information about the optical transmission system including the number of optical channels, channel identification, wavelength, power, and in some cases optical signal-to-noise ratio (OSNR). Until recently OCMs were costly and bulky and thus the number of OCMs used in optical networks was often kept to a minimum. We describe a family of tunable thin film filters which have greatly reduced the cost and physical footprint of channel monitors, making possible 'monitoring everywhere' for intelligent optical networks which can serve long haul, metro and access requirements from a single technology platform. As examples of specific applications we discuss network issues such as auto provisioning, wavelength collision avoidance, power balancing, OSNR balancing, gain equalization, alien wavelength recognition, interoperability, and other requirements assigned to the emerging concept of an Optical Control Plane.
Multicriteria meta-heuristics for AGV dispatching control based on computational intelligence.
Naso, David; Turchiano, Biagio
2005-04-01
In many manufacturing environments, automated guided vehicles (AGVs) are used to move the processed materials between various pickup and delivery points. The assignment of vehicles to unit loads is a complex problem that is often solved in real time with simple dispatching rules. This paper proposes an AGV dispatching approach based on computational intelligence. We adopt a fuzzy multicriteria decision strategy to simultaneously take into account multiple aspects in every dispatching decision. Since the typical short-term view of dispatching rules is one of the main limitations of such real-time assignment heuristics, we also incorporate into the multicriteria algorithm a specific heuristic rule that takes into account the empty-vehicle travel over a longer time horizon. Moreover, we adopt a genetic algorithm to tune the weights associated with each decision criterion in the global decision algorithm. The proposed approach is validated by means of a comparison with other dispatching rules and with other recently proposed multicriteria dispatching strategies also based on computational intelligence. The analysis of the results obtained by the proposed dispatching approach in both nominal and perturbed operating conditions (congestion, faults) confirms its effectiveness.
Parallel processing considerations for image recognition tasks
NASA Astrophysics Data System (ADS)
Simske, Steven J.
2011-01-01
Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification and barcode reading, to parallel pipelines. This can substantially decrease the time to completion for document tasks. In this approach, each parallel pipeline generally performs a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach; examples include document skew detection and multiple face detection and tracking (see the sketch below). Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
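A minimal sketch of the map-reduce pattern for parallel processing by image region; the tile size and the per-tile task are assumptions for illustration:

```python
from multiprocessing import Pool

import numpy as np

def detect_in_tile(tile):
    """Stand-in per-region task, e.g. face or skew detection on one tile."""
    return int(tile.mean() > 128)   # hypothetical detection score

def split_tiles(image, size=256):
    """Cut the image into square regions for independent processing."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h, size) for c in range(0, w, size)]

if __name__ == "__main__":
    image = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
    with Pool() as pool:                       # map step over regions
        results = pool.map(detect_in_tile, split_tiles(image))
    print(sum(results), "tiles flagged")       # reduce step
```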
A model of cloud application assignments in software-defined storages
NASA Astrophysics Data System (ADS)
Bolodurina, Irina P.; Parfenov, Denis I.; Polezhaev, Petr N.; E Shukhman, Alexander
2017-01-01
The aim of this study is to analyze the structure and mechanisms of interaction of typical cloud applications and to suggest approaches to optimizing their placement in storage systems. In this paper, we describe a generalized model of cloud applications comprising three basic layers: a model of the application, a model of the service, and a model of the resource. The distinctive feature of the suggested model is that it analyzes cloud resources from the user's point of view and from the point of view of a software-defined infrastructure of the virtual data center (DC). The innovative character of this model lies in describing at the same time the application data placements and the state of the virtual environment, taking into account the network topology. A model of software-defined storage has been developed as a submodel within the resource model. This model allows implementing an algorithm for controlling cloud application assignments in software-defined storages. Experiments showed that this algorithm decreases cloud application response time and increases performance in processing user requests. The use of software-defined data storage also reduces the number of physical storage devices required, which demonstrates the efficiency of our algorithm.
A new mutually reinforcing network node and link ranking algorithm
Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E.
2015-01-01
This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking nodes and links of a network simultaneously. This algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity. PMID:26492958
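As a rough illustration of the mutual-reinforcement idea behind NWRank (node scores informing link scores and vice versa, with normalization each round), the following toy sketch is a simplified caricature, not the published algorithm, which additionally weights links by the degree of adjacent neighbors and Betweenness Centrality:

```python
import numpy as np

def rank_nodes_and_links(edges, n, iters=50):
    """Toy mutually reinforcing ranking: node scores feed link scores
    and vice versa, normalized each round (NWRank-inspired sketch)."""
    node = np.ones(n)
    link = np.ones(len(edges))
    for _ in range(iters):
        # A link is scored by the nodes it connects...
        link = np.array([node[u] + node[v] for u, v in edges])
        link /= link.sum()
        # ...and a node is scored by its incident links.
        node = np.zeros(n)
        for w, (u, v) in zip(link, edges):
            node[u] += w
            node[v] += w
        node /= node.sum()
    return node, link

edges = [(0, 1), (1, 2), (1, 3), (2, 3)]
print(rank_nodes_and_links(edges, 4))   # node 1, with three links, ranks highest
```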
Dual-excitation wavelength resonance Raman explosives detector
NASA Astrophysics Data System (ADS)
Yellampalle, Balakishore; Sluch, Mikhail; Wu, Hai-Shan; Martin, Robert; McCormick, William; Ice, Robert; Lemoff, Brian E.
2013-05-01
Deep-ultraviolet resonance Raman spectroscopy (DUVRRS) is a promising approach to stand-off detection of explosive traces due to: 1) resonant enhancement of the Raman cross-section, 2) λ⁻⁴ cross-section enhancement, and 3) fluorescence- and solar-background-free signatures. For trace detection, these signal enhancements more than offset the small penetration depth due to DUV absorption. A key challenge for stand-off sensors is to distinguish explosives, with high confidence, from a myriad of unknown background materials that may have interfering spectral peaks. To address this, we are developing a stand-off explosive sensor using DUVRRS with two simultaneous DUV excitation wavelengths. Due to the complex interplay of resonant enhancement, self-absorption and laser penetration depth, significant amplitude variation is observed between corresponding Raman bands at different excitation wavelengths. These variations with excitation wavelength provide an orthogonal signature that complements the traditional Raman signature to improve specificity relative to single-excitation-wavelength techniques. As part of this effort, we are developing two novel CW DUV lasers, which have the potential to be compact, and a compact dual-band high-throughput DUV spectrometer capable of simultaneous detection of Raman spectra in two spectral windows. We have also developed a highly sensitive algorithm for the detection of explosives in low signal-to-noise situations.
Pu, Hongbin; Sun, Da-Wen; Ma, Ji; Cheng, Jun-Hu
2015-01-01
The potential of visible and near infrared hyperspectral imaging was investigated as a rapid and nondestructive technique for classifying fresh and frozen-thawed meats by integrating critical spectral and image features extracted from hyperspectral images in the region of 400-1000 nm. Six feature wavelengths (400, 446, 477, 516, 592 and 686 nm) were identified using uninformative variable elimination and successive projections algorithm. Image textural features of the principal component images from hyperspectral images were obtained using histogram statistics (HS), gray level co-occurrence matrix (GLCM) and gray level-gradient co-occurrence matrix (GLGCM). By these spectral and textural features, probabilistic neural network (PNN) models for classification of fresh and frozen-thawed pork meats were established. Compared with the models using the optimum wavelengths only, optimum wavelengths with HS image features, and optimum wavelengths with GLCM image features, the model integrating optimum wavelengths with GLGCM gave the highest classification rate of 93.14% and 90.91% for calibration and validation sets, respectively. Results indicated that the classification accuracy can be improved by combining spectral features with textural features and the fusion of critical spectral and textural features had better potential than single spectral extraction in classifying fresh and frozen-thawed pork meat. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Comparison of Several Techniques to Assign Heights to Cloud Tracers.
NASA Astrophysics Data System (ADS)
Nieman, Steven J.; Schmetz, Johannes; Menzel, W. Paul
1993-09-01
Satellite-derived cloud-motion vector (CMV) production has been troubled by inaccurate height assignment of cloud tracers, especially in thin semitransparent clouds. This paper presents the results of an intercomparison of current operational height assignment techniques. Currently, heights are assigned by one of three techniques when the appropriate spectral radiance measurements are available. The infrared window (IRW) technique compares measured brightness temperatures to forecast temperature profiles and thus infers opaque cloud levels. In semitransparent or small subpixel clouds, the carbon dioxide (CO2) technique uses the ratio of radiances from different layers of the atmosphere to infer the correct cloud height. In the water vapor (H2O) technique, radiances influenced by upper-tropospheric moisture and IRW radiances are measured for several pixels viewing different cloud amounts, and their linear relationship is used to extrapolate the correct cloud height. The results presented in this paper suggest that the H2O technique is a viable alternative to the CO2 technique for inferring the heights of semitransparent cloud elements. This is important since future National Environmental Satellite, Data, and Information Service (NESDIS) operations will have to rely on H2O-derived cloud-height assignments in the wind field determinations with the next operational geostationary satellite. On a given day, the heights from the two approaches agree to within 60-110 hPa rms; drier atmospheric conditions tend to reduce the effectiveness of the H2O technique. By inference one can conclude that the present height algorithms used operationally at NESDIS (with the CO2 technique) and at the European Satellite Operations Center (ESOC) (with their version of the H2O technique) are providing similar results. Sample wind fields produced with the ESOC and NESDIS algorithms using Meteosat-4 data show good agreement.
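A schematic of the linear-extrapolation step of the H2O technique described above; the radiance values and the opaque-cloud reference are illustrative assumptions, and the conversion of the extrapolated radiance to a height via forward-calculated opaque radiances is not reproduced:

```python
import numpy as np

# Radiances (arbitrary units) from pixels viewing different cloud fractions.
irw = np.array([82.0, 75.5, 68.1, 61.4, 55.0])   # 11 µm window channel
h2o = np.array([24.0, 22.1, 20.0, 18.2, 16.3])   # 6.7 µm water-vapor channel

slope, intercept = np.polyfit(irw, h2o, 1)        # linear pixel-cluster fit

# Extrapolate the cluster to a hypothetical opaque-cloud IRW radiance;
# matching the extrapolated H2O radiance against forward-calculated
# opaque radiance curves would then yield the cloud-top height.
irw_opaque = 40.0                        # assumed opaque-cloud radiance
print(slope * irw_opaque + intercept)    # inferred H2O radiance at cloud top
```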
Storage assignment optimization in a multi-tier shuttle warehousing system
NASA Astrophysics Data System (ADS)
Wang, Yanyan; Mou, Shandong; Wu, Yaohua
2016-03-01
The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem (TSP), which has been widely applied in the conventional automated storage and retrieval system (AS/RS). However, the previous mathematical models for conventional AS/RS do not match multi-tier shuttle warehousing systems (MSWS), because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement destroy the foundation of the TSP. In this study, a two-stage open queuing network model, in which shuttles and a lift are regarded as servers at different stages, is proposed to analyze system performance in terms of the shuttle waiting period (SWP) and lift idle period (LIP) during the transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length, so as to optimize the storage assignment problem on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. An ant colony clustering algorithm is designed to determine storage partitions by clustering items. In addition, goods are assigned for storage according to the rearranged permutation and combination of storage partitions in a 2D plane, derived from the analysis results of the queuing network model and three basic principles. The storage assignment method and its overall optimization algorithm, as applied in an MSWS, are verified through a practical engineering project conducted in the tobacco industry. The results show that the total SWP and LIP can be reduced effectively, improving the utilization rates of all devices and increasing the throughput of the distribution center.
Hernandez, Penni; Podchiyska, Tanya; Weber, Susan; Ferris, Todd; Lowe, Henry
2009-11-14
The Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse integrates medication information from two Stanford hospitals that use different drug representation systems. To merge this pharmacy data into a single, standards-based model supporting research we developed an algorithm to map HL7 pharmacy orders to RxNorm concepts. A formal evaluation of this algorithm on 1.5 million pharmacy orders showed that the system could accurately assign pharmacy orders in over 96% of cases. This paper describes the algorithm and discusses some of the causes of failures in mapping to RxNorm.
Using recurrence plot analysis for software execution interpretation and fault detection
NASA Astrophysics Data System (ADS)
Mosdorf, M.
2015-09-01
This paper shows a method targeted at software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. Results of this analysis are subject to further processing with the PCA (Principal Component Analysis) method, which reduces the number of coefficients used for software execution classification. This method was used for the analysis of five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, SHA-1. Results show that some of the collected traces could be easily assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms) while others are more difficult to distinguish.
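A minimal sketch of the recurrence-plot step applied to an execution trace; encoding instructions as integers and the threshold value are assumptions for illustration:

```python
import numpy as np

def recurrence_matrix(trace, eps=0):
    """R[i, j] = 1 where trace points i and j lie within eps of each other."""
    t = np.asarray(trace, dtype=float)
    dist = np.abs(t[:, None] - t[None, :])
    return (dist <= eps).astype(np.uint8)

# Hypothetical opcode stream from an executed routine; repeated patterns
# in the code show up as diagonal structures in the recurrence matrix.
opcodes = [3, 7, 7, 2, 3, 7, 7, 2, 5]
R = recurrence_matrix(opcodes)
features = R.flatten()        # e.g. one input row for PCA across many traces
print(R)
```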
Crammer, Koby; Singer, Yoram
2005-01-01
We discuss the problem of ranking instances. In our framework, each instance is associated with a rank or a rating, which is an integer between 1 and k. Our goal is to find a rank-prediction rule that assigns each instance a rank that is as close as possible to the instance's true rank. We discuss a group of closely related online algorithms, analyze their performance in the mistake-bound model, and prove their correctness. We describe two sets of experiments, with synthetic data and with the EachMovie data set for collaborative filtering. In the experiments we performed, our algorithms outperform online algorithms for regression and classification applied to ranking.
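One concrete member of this family of online rank-prediction rules maintains a weight vector plus ordered thresholds and nudges both whenever the predicted rank is wrong; the sketch below follows that PRank-style scheme, with the demo data and training loop being illustrative assumptions:

```python
import numpy as np

class PRankStyle:
    """Simplified online ordinal ranker: weight vector + ordered thresholds."""
    def __init__(self, dim, k):
        self.w = np.zeros(dim)
        self.b = np.concatenate([np.zeros(k - 1), [np.inf]])  # b_k = +inf

    def predict(self, x):
        score = self.w @ x
        return int(np.argmax(score < self.b)) + 1  # smallest r with score < b_r

    def update(self, x, y):
        score = self.w @ x
        tau = np.zeros(len(self.b) - 1)
        for r in range(1, len(self.b)):            # thresholds b_1..b_{k-1}
            yr = 1 if y > r else -1                # desired side of b_r
            if (score - self.b[r - 1]) * yr <= 0:  # threshold violated
                tau[r - 1] = yr
        self.w += tau.sum() * x                    # move w toward correction
        self.b[:-1] -= tau                         # shift violated thresholds

model = PRankStyle(dim=2, k=3)
for x, y in [(np.array([1.0, 0.0]), 1), (np.array([0.0, 1.0]), 3)] * 20:
    if model.predict(x) != y:
        model.update(x, y)
print(model.predict(np.array([1.0, 0.0])), model.predict(np.array([0.0, 1.0])))
```

In the published analysis of PRank the thresholds provably remain sorted after every update; the toy loop above learns the two patterns and prints 1 3.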
Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy
NASA Astrophysics Data System (ADS)
Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan
2016-03-01
Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips commonly used in, e.g., mobile phones. Here we report, for the first time, a wavelength-scanning-based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral-shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength-scanning-based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.
Computing Role Assignments of Proper Interval Graphs in Polynomial Time
NASA Astrophysics Data System (ADS)
Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël
A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.
DOT National Transportation Integrated Search
2011-01-01
This study develops an enhanced transportation planning framework by augmenting the sequential four-step : planning process with post-processing techniques. The post-processing techniques are incorporated through a feedback : mechanism and aim to imp...
Finding minimum-quotient cuts in planar graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, J.K.; Phillips, C.A.
Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and of Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem, also a pseudopolynomial-time algorithm, guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
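To make the objective concrete, a small sketch that evaluates the quotient of a given cut (S, S̄) under the definition above (illustrative only; the paper's algorithms search for the minimizing cut, which this does not do):

```python
def cut_quotient(S, weights, edges):
    """Quotient c(S, S̄) / min(w(S), w(S̄)) for a vertex subset S."""
    S = set(S)
    cut_cost = sum(c for (u, v), c in edges.items()
                   if (u in S) != (v in S))          # edges crossing the cut
    w_s = sum(weights[v] for v in S)
    w_sbar = sum(w for v, w in weights.items() if v not in S)
    return cut_cost / min(w_s, w_sbar)

weights = {0: 1, 1: 1, 2: 1, 3: 1}
edges = {(0, 1): 2, (1, 2): 1, (2, 3): 2, (3, 0): 1}
print(cut_quotient({0, 1}, weights, edges))  # (1 + 1) / min(2, 2) = 1.0
```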
NASA Satellite Monitoring of Water Clarity in Mobile Bay for Nutrient Criteria Development
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Holekamp, Kara; Spiering, Bruce A.
2009-01-01
This project has demonstrated the feasibility of deriving, from daily MODIS measurements, time series of water clarity parameters that provide coverage of a specific location or area of interest on 30-50% of days. Time series derived for estuarine and coastal waters display much higher variability than time series of ecological parameters (such as vegetation indices) derived for land areas, and the temporal filtering often applied in terrestrial studies cannot be used effectively in ocean color processing. IOP-based algorithms for retrieval of the diffuse light attenuation coefficient and TSS concentration perform well for the Mobile Bay environment: only a minor adjustment was needed in the TSS algorithm, despite the generally recognized dependence of such algorithms on local conditions. The current IOP-based algorithm for retrieval of chlorophyll a concentration has not performed as well: a more reliable algorithm is needed, possibly based on IOPs at additional wavelengths or on remote sensing reflectance from multiple spectral bands. The CDOM algorithm also needs improvement to provide better separation between the effects of gilvin (gelbstoff) and detritus. Identification or development of such an algorithm requires more data from in situ measurements of CDOM concentration in Gulf of Mexico coastal waters (an ongoing collaboration with the EPA Gulf Ecology Division).
Remote Sensing Reflectance and Inherent Optical Properties in the Mid-mesohaline Chesapeake Bay
NASA Technical Reports Server (NTRS)
Tzortziou, Maria; Subramaniam, Ajit; Herman, Jay R.; Gallegos, Charles L.; Neal, Patrick J.; Harding, Lawrence W., Jr.
2006-01-01
We used an extensive set of bio-optical data and radiative transfer (RT) model simulations of radiation fields to investigate relationships between inherent optical properties and remotely sensed quantities in the optically complex, mid-mesohaline Chesapeake Bay waters. Field observations showed that the chlorophyll algorithms used by the MODIS (MODerate resolution Imaging Spectroradiometer) ocean color sensor (i.e. the Chlor_a, chlor_MODIS, and chlor_a_3 products) do not perform accurately in these Case 2 waters. This is because, when applied to waters with high concentrations of chlorophyll, all MODIS algorithms are based on empirical relationships between chlorophyll concentration and blue-green wavelength remote sensing reflectance (Rrs) ratios that do not account for the typically strong blue-wavelength absorption by non-covarying dissolved and non-algal particulate components. Stronger correlation was observed between chlorophyll concentration and Rrs ratios in the red (i.e. Rrs(677)/Rrs(554)), where dissolved and non-algal particulate absorption become exponentially smaller. Regionally specific algorithms that are based on the phytoplankton optical properties in the red wavelength region provide a better basis for satellite monitoring of phytoplankton blooms in these Case 2 waters. Good optical closure was obtained between independently measured Rrs spectra and the optical properties of backscattering, b_b, and absorption, a, over the wide range of in-water conditions observed in the Chesapeake Bay. Observed variability in the quantity f/Q (the proportionality factor in the relationship between Rrs and the inherent optical properties ratio b_b/(a + b_b)) was consistent with RT model calculations for the specific measurement geometry and water bio-optical characteristics. Data and model results showed that f/Q values in these Case 2 coastal waters are not considerably different from those estimated in previous studies for Case 1 waters. Variation in surface backscattering significantly affected the Rrs magnitude across the visible spectrum and was most strongly correlated (R² = 0.88) with observed variability in Rrs at 670 nm. Surface values of particulate backscattering were strongly correlated with non-algal particulate absorption, a_nap, in the blue wavelengths (R² = 0.83). These results, along with the measured values of the backscattering fraction magnitude and the non-algal particulate absorption spectral slope, suggest that suspended non-algal particles with high inorganic content are the major water constituents regulating b_b variability in the mid-mesohaline Chesapeake Bay. Remote retrieval of surface b_b and a_nap from Rrs(670) can be used in regionally specific satellite algorithms to separate the contributions of non-algal particles and dissolved organic matter to total light absorption in the blue, and to monitor non-algal suspended particle concentration and distribution in these Case 2 waters.
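The closure relation discussed above can be stated explicitly: Rrs ≈ (f/Q) · b_b/(a + b_b). A one-line sketch follows, with illustrative IOP values and an assumed f/Q near the commonly cited ~0.09 sr⁻¹, not the paper's data:

```python
def rrs_from_iops(a, bb, f_over_q=0.09):
    """Remote sensing reflectance from IOPs: Rrs ≈ (f/Q) * b_b / (a + b_b)."""
    return f_over_q * bb / (a + bb)

# Illustrative Case-2 values at 670 nm (absorption and backscattering in 1/m).
print(rrs_from_iops(a=0.65, bb=0.035))  # ~0.0046 sr^-1
```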
NASA Astrophysics Data System (ADS)
Hadi, M. Z.; Djatna, T.; Sugiarto
2018-04-01
This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining the appropriate storage location and space for each beverage product dynamically so that the performance of the system can be improved. The study constructs a graph model to represent drive-in rack storage positions and then combines association rule mining, class-based storage policies and an arrangement rule algorithm to determine an appropriate storage location and arrangement of the products according to dynamic orders from customers. The performance of the proposed model is measured as rule adjacency accuracy, travel distance (for the picking process) and the probability that a product expires, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented through computer simulation and its performance is compared with that of different storage assignment methods. The results indicate that the proposed model outperforms the other storage assignment methods.
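As a rough illustration of how class-based placement and mined association rules can interact, consider the sketch below; all names, data structures and the 0.3 support threshold are hypothetical, and the greedy adjacency pass is a simplification standing in for the paper's arrangement rule algorithm.

```python
# Hypothetical sketch: class-based lane assignment followed by a greedy
# adjacency pass driven by association-rule support. Not the paper's model.
def assign_lanes(pick_freq, lanes, support, min_support=0.3):
    """pick_freq: {sku: picks per period}; lanes: lane ids ordered from
    nearest to farthest from the depot; support: {(sku_a, sku_b): co-order
    support from mined rules}. Returns {sku: lane}."""
    ranked = sorted(pick_freq, key=pick_freq.get, reverse=True)
    assignment = dict(zip(ranked, lanes))          # class-based placement
    for (a, b), s in sorted(support.items(), key=lambda kv: -kv[1]):
        if s < min_support or a not in assignment or b not in assignment:
            continue
        i, j = lanes.index(assignment[a]), lanes.index(assignment[b])
        if abs(i - j) > 1:                         # co-ordered but far apart
            k = i + 1 if i + 1 < len(lanes) else i - 1   # lane next to a
            other = next(x for x, l in assignment.items() if l == lanes[k])
            assignment[b], assignment[other] = lanes[k], assignment[b]
    return assignment
```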
Protein assignments without peak lists using higher-order spectra.
Benison, Gregory; Berkholz, Donald S; Barbar, Elisar
2007-12-01
Despite advances in automating the generation and manipulation of peak lists for assigning biomolecules, there are well-known advantages to working directly with spectra: the eye is still superior to computer algorithms when it comes to picking out peak relationships from contour plots in the presence of confounding factors such as noise, overlap, and spectral artifacts. Here, we present constructs called higher-order spectra for identifying, through direct visual examination, many of the same relationships typically identified by searching peak lists, making them another addition to the set of tools (alongside peak picking and automated assignment) that can be used to solve the assignment problem. The technique is useful for searching for correlated peaks in any spectrum type. Application of this technique to the novel, complete sequential assignment of two proteins (AhpFn and IC74(84-143)) is demonstrated. The program "burrow-owl" for the generation and display of higher-order spectra is available at http://sourceforge.net/projects/burrow-owl or from the authors.
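To make the construct concrete, here is one plausible way to build a higher-order spectrum from two 2D spectra that share a dimension: multiply intensities along the shared axis so that only correlated peaks survive. This is a hedged illustration of the general idea, not necessarily the exact definition used by burrow-owl.

```python
import numpy as np

def higher_order(s1, s2):
    """s1: array of shape (nx, ny); s2: array of shape (ny, nz), sharing
    the middle (y) dimension. Returns h with h[x, y, z] = s1[x, y] *
    s2[y, z]: a peak appears in h only where both spectra have intensity
    at the same shared frequency."""
    return s1[:, :, None] * s2[None, :, :]

def xz_projection(h):
    """Collapse the shared axis to a synthetic x-z correlation plane that
    can be contoured and inspected by eye."""
    return h.sum(axis=1)
```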
Upstream capacity upgrade in TDM-PON using RSOA based tunable fiber ring laser.
Yi, Lilin; Li, Zhengxuan; Dong, Yi; Xiao, Shilin; Chen, Jian; Hu, Weisheng
2012-04-23
An upstream multi-wavelength shared (UMWS) time division multiplexing passive optical network (TDM-PON) is presented that uses a directly modulated fiber ring laser, based on a reflective semiconductor optical amplifier (RSOA) and a tunable optical filter (TOF), as the upstream laser source. Stable laser operation is easily achieved regardless of the bandwidth and shape of the TOF, and the laser can be directly modulated when the RSOA is driven in its saturation region. In this UMWS TDM-PON system, an individual wavelength can be assigned to a user with high bandwidth demand by tuning the central wavelength of the TOF in that user's upgraded optical network unit (ONU), while the other users keep their traditional ONU structure and share the bandwidth via time slots, which greatly and dynamically upgrades the upstream capacity. We experimentally demonstrated bidirectional transmission of downstream data at 10 Gb/s and upstream data at 1.25 Gb/s per wavelength over 25 km of single-mode fiber (SMF) with almost no power penalty at either end. Stable performance is observed for the upstream wavelength tuned from 1530 nm to 1595 nm. Moreover, owing to the high extinction ratio (ER) of the upstream signal, burst-mode transmission is successfully demonstrated, and better time-division multiplexing performance can be obtained by turning off unused lasers thanks to the rapid build-up of lasing in the fiber ring. © 2012 Optical Society of America
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hegde, Raghurama P.; Fedorov, Alexander A.; Sauder, J. Michael
Single-wavelength anomalous dispersion (SAD) utilizing anomalous signal from native S atoms, or other atoms with Z ≤ 20, generally requires highly redundant data collected using relatively long-wavelength X-rays. Here, the results from two proteins are presented where the anomalous signal from serendipitously acquired surface-bound Ca atoms with an anomalous data multiplicity of around 10 was utilized to drive de novo structure determination. In both cases, the Ca atoms were acquired from the crystallization solution, and the data-collection strategy was not optimized to exploit the anomalous signal from these scatterers. The X-ray data were collected at 0.98 Å wavelength in one case and at 1.74 Å in the other (the wavelength was optimized for sulfur, but the anomalous signal from calcium was exploited for structure solution). Similarly, using a test case, it is shown that data collected at ~1.0 Å wavelength, where the f'' value for sulfur is 0.28 e, are sufficient for structure determination using intrinsic S atoms from a strongly diffracting crystal. Interestingly, it was also observed that SHELXD was capable of generating a substructure solution from high-exposure data with a completeness of 70% for low-resolution reflections extending to 3.5 Å resolution with relatively low anomalous multiplicity. Considering the fact that many crystallization conditions contain anomalous scatterers such as Cl, Ca, Mn etc., checking for the presence of fortuitous anomalous signal in data from well diffracting crystals could prove useful in either determining the structure de novo or in accurately assigning surface-bound atoms.
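A back-of-envelope estimate makes clear why such datasets normally need high multiplicity. The sketch below uses the standard Hendrickson-style approximation for the expected Bijvoet ratio; the example numbers (4 sulfurs in a 1500-atom protein) are illustrative, not from the paper, while f''(S) = 0.28 e at ~1.0 Å is the value quoted above.

```python
from math import sqrt

def bijvoet_ratio(n_anom, n_atoms, f_dprime, z_eff=6.7):
    """Expected <|dF+/-|>/<|F|> ~ sqrt(2*N_anom/N_atoms) * f''/Z_eff,
    with Z_eff ~ 6.7 the usual effective normal scattering per non-H
    protein atom."""
    return sqrt(2.0 * n_anom / n_atoms) * f_dprime / z_eff

# e.g. 4 S atoms in a ~1500-atom protein at ~1.0 A, where f''(S) = 0.28 e:
print(f"{bijvoet_ratio(4, 1500, 0.28):.2%}")  # ~0.3% anomalous signal,
                                              # hence the high multiplicity
```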
Cheng, Shu-Xi; Xie, Chuan-Qi; Wang, Qiao-Nan; He, Yong; Shao, Yong-Ni
2014-05-01
Identification of early blight on tomato leaves using hyperspectral imaging based on different effective-wavelength selection methods (successive projections algorithm, SPA; x-loading weights, x-LW; Gram-Schmidt orthogonalization, GSO) was studied in the present paper. Hyperspectral images of seventy healthy and seventy infected tomato leaves were obtained by a hyperspectral imaging system across the wavelength range of 380-1023 nm. Reflectance of all pixels in the region of interest (ROI) was extracted with ENVI 4.7 software. A least squares support vector machine (LS-SVM) model was established based on the full spectral wavelengths and achieved the highest identification accuracy (100%) in both calibration and prediction sets. Then, EW-LS-SVM and EW-LDA models were established based on the wavelengths selected by SPA, x-LW and GSO, respectively. All of the EW-LS-SVM and EW-LDA models performed well, with identification accuracies of 100% for the EW-LS-SVM models and of 100%, 100% and 97.83% for the EW-LDA models, respectively. Moreover, the numbers of input wavelengths of the SPA-LS-SVM, x-LW-LS-SVM and GSO-LS-SVM models were four (492, 550, 633 and 680 nm), three (631, 719 and 747 nm) and two (533 and 657 nm), respectively. Fewer input variables are beneficial for the development of an identification instrument. These results demonstrate that it is feasible to identify early blight on tomato leaves using hyperspectral imaging, and that SPA, x-LW and GSO are effective wavelength selection methods.
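For readers wanting to reproduce the band-selection pipeline, the sketch below maps the SPA-selected wavelengths to the nearest camera bands and trains a classifier on just those bands. scikit-learn's SVC is used as a stand-in for LS-SVM (which scikit-learn does not provide), and the data-loading names are hypothetical.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SPA_BANDS_NM = [492, 550, 633, 680]     # SPA-selected wavelengths (above)

def band_indices(cube_wavelengths_nm, selected_nm):
    """Map each selected wavelength to the nearest hyperspectral band."""
    w = np.asarray(cube_wavelengths_nm, dtype=float)
    return [int(np.abs(w - s).argmin()) for s in selected_nm]

# X: (n_leaves, n_bands) mean ROI reflectance; y: 0 = healthy, 1 = infected
# idx = band_indices(cube_wavelengths, SPA_BANDS_NM)
# model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
# model.fit(X[:, idx], y)
```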
Fluorescence lifetime measurements in heterogeneous scattering medium
NASA Astrophysics Data System (ADS)
Nishimura, Goro; Awasthi, Kamlesh; Furukawa, Daisuke
2016-07-01
Fluorescence lifetime in heterogeneous multiple-light-scattering systems is analyzed by an algorithm that does not require solving the diffusion or radiative transfer equations. The algorithm assumes that the optical properties of the medium are the same in the excitation and emission wavelength regions. If this assumption holds and the fluorophore is a single species, the fluorescence lifetime can be determined from a set of measurements of the temporal point-spread function of the excitation light and of the fluorescence at two different concentrations of the fluorophore. The method is independent of the heterogeneity of the optical properties of the medium and of the excitation-detection geometry on an arbitrarily shaped sample. The algorithm was validated with indocyanine green fluorescence in phantom measurements and demonstrated in an in vivo measurement.
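The core cancellation can be illustrated with a single-measurement simplification: if the propagation factors at the excitation and emission wavelengths are identical, the Fourier-domain ratio of the fluorescence TPSF to the excitation TPSF reduces to 1/(1 + iωτ), whose phase yields τ. This is a hedged sketch of that idea, not the authors' two-concentration procedure.

```python
import numpy as np

def fit_lifetime(t, tpsf_ex, tpsf_fl, n_freq=20):
    """t: time axis (s); tpsf_ex, tpsf_fl: excitation and fluorescence
    temporal point-spread functions measured at the same position.
    Returns tau estimated from the phase of the Fourier-domain ratio."""
    dt = t[1] - t[0]
    freqs = np.fft.rfftfreq(len(t), dt)[1:n_freq + 1]        # skip DC
    ratio = (np.fft.rfft(tpsf_fl) / np.fft.rfft(tpsf_ex))[1:n_freq + 1]
    # phase of 1/(1 + i*w*tau) is -arctan(w*tau), so tau = tan(-phase)/w
    taus = np.tan(-np.angle(ratio)) / (2.0 * np.pi * freqs)
    return taus.mean()
```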
NASA Astrophysics Data System (ADS)
Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho
2017-11-01
Motivated by biomedical engineering applications in early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for locating small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of the Born approximation or physical factorization. We analyzed cases in which the anomaly is small or large relative to the wavelength, and linked the structure of the left-singular vectors to the nonzero singular values of a Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
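The imaging step itself is compact. The sketch below runs MUSIC on an MSR matrix of S-parameters: the SVD splits signal and noise subspaces, and the map peaks where a test vector is nearly orthogonal to the noise subspace. The free-space scalar steering vector is an assumption for illustration; the paper's vectors follow from the Born approximation for its particular antenna configuration.

```python
import numpy as np

def music_map(K, antenna_pos, grid, k0, signal_rank):
    """K: (N, N) multistatic response matrix of S-parameters; antenna_pos:
    (N, 3) antenna locations; grid: (M, 3) search points; k0: background
    wavenumber; signal_rank: number of significant singular values.
    Returns an (M,) image peaking at anomaly locations."""
    U, _, _ = np.linalg.svd(K)
    Un = U[:, signal_rank:]                     # noise subspace
    img = np.empty(len(grid))
    for m, r in enumerate(grid):
        d = np.linalg.norm(antenna_pos - r, axis=1)
        g = np.exp(1j * k0 * d) / d             # scalar Green's vector
        g /= np.linalg.norm(g)
        img[m] = 1.0 / np.linalg.norm(Un.conj().T @ g)
    return img
```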
Calculation and measurement of radiation corrections for plasmon resonances in nanoparticles
NASA Astrophysics Data System (ADS)
Hung, L.; Lee, S. Y.; McGovern, O.; Rabin, O.; Mayergoyz, I.
2013-08-01
The problem of plasmon resonances in metallic nanoparticles can be formulated as an eigenvalue problem under the condition that the wavelengths of the incident radiation are much larger than the particle dimensions. As the nanoparticle size increases, the quasistatic condition is no longer valid. For this reason, the accuracy of the electrostatic approximation may be compromised and appropriate radiation corrections for the calculation of resonance permittivities and resonance wavelengths are needed. In this paper, we present the radiation corrections in the framework of the eigenvalue method for plasmon mode analysis and demonstrate that the computational results accurately match analytical solutions (for nanospheres) and experimental data (for nanorings and nanocubes). We also demonstrate that the optical spectra of silver nanocube suspensions can be fully assigned to dipole-type resonance modes when radiation corrections are introduced. Finally, our method is used to predict the resonance wavelengths for face-to-face silver nanocube dimers on glass substrates. These results may be useful for the indirect measurements of the gaps in the dimers from extinction cross-section observations.
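For the special case of a sphere, a widely used closed-form version of such radiation corrections is the modified long-wavelength approximation (MLWA), sketched below in the Gaussian-units convention. It only illustrates why resonances red-shift and broaden with size; the paper's eigenvalue-method corrections apply to general shapes such as rings and cubes.

```python
import numpy as np

def mlwa_polarizability(eps_p, eps_m, a, wavelength):
    """Dipole polarizability of a sphere of radius a (same length units as
    wavelength) with permittivity eps_p in a medium eps_m, including the
    dynamic-depolarization (k^2/a) and radiation-damping (i*k^3) terms."""
    k = 2.0 * np.pi * np.sqrt(eps_m) / wavelength    # wavenumber in medium
    alpha0 = a**3 * (eps_p - eps_m) / (eps_p + 2.0 * eps_m)   # quasistatic
    return alpha0 / (1.0 - (2.0 / 3.0) * 1j * k**3 * alpha0
                         - (k**2 / a) * alpha0)

# Extinction follows from sigma_ext = 4*pi*k*Im(alpha); scanning the
# wavelength locates the red-shifted, broadened resonance of a larger sphere.
```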
Assessment of diverse algorithms applied on MODIS Aqua and Terra data over land surfaces in Europe
NASA Astrophysics Data System (ADS)
Glantz, P.; Tesche, M.
2012-04-01
Besides increasing the concentrations of greenhouse gases (e.g., carbon dioxide, methane and nitrous oxide), human activities (for instance fossil fuel and biomass burning) have led to a perturbation of the atmospheric content of aerosol particles. Aerosols exhibit high spatial and temporal variability in the atmosphere; aerosol investigation for climate research and environmental control therefore requires the identification of source regions, their strength and the aerosol type, which can be retrieved from space-borne observations. The aim of the present study is to validate and evaluate AOT (aerosol optical thickness) and Ångström exponent, obtained with the SAER (Satellite AErosol Retrieval) algorithm from MODIS (MODerate resolution Imaging Spectroradiometer) Aqua and Terra calibrated level 1 data (1 km horizontal resolution at ground), against AERONET (AErosol RObotic NETwork) observations and MODIS Collection 5 (c005) standard product retrievals (10 km), respectively, over land surfaces in Europe for three seasons: early spring (period 1), mid-spring (period 2) and summer (period 3). For several of the cases analyzed here the Aqua and Terra satellites passed the investigation area twice during a day; thus, besides a variation in sun elevation, the satellite aerosol retrievals were also performed daily with a significant variation in satellite-viewing geometry. An inter-comparison of the two algorithms has also been performed. The validation with AERONET shows that the MODIS c005 retrieved AOT is, for the wavelengths 469 and 500 nm, on the whole within the expected uncertainty of one standard deviation of the MODIS retrievals over Europe (Δτ = ±0.05 ± 0.15τ). The SAER estimated AOT at 443 nm also agrees reasonably well with AERONET: the majority of the SAER AOT values are within the MODIS expected uncertainty range, although a somewhat larger RMSD (root mean square deviation) occurs compared to the results obtained with the MODIS c005 algorithm. The discrepancy between SAER and AERONET AOT is, however, substantially larger at 488 nm, where several of the AOT values fall outside the MODIS expected uncertainty range. Neither algorithm is able to estimate the Ångström exponent accurately, although the MODIS c005 algorithm does a better job. Based on the inter-comparison of the SAER and MODIS c005 algorithms, the SAER estimates of AOT are, for values up to 1, on the whole within the expected uncertainty of one standard deviation of the MODIS retrievals, considering both Aqua and Terra and periods 1 and 3. The same holds for Aqua during period 2, although only for AOT values lower than 0.5. The algorithms were successfully applied not only to aerosols from clean and continental sources in Europe but also to aerosol particles transported from agricultural fires in Russia and Ukraine; the latter events were associated with high aerosol loadings, although probably with a single-scattering albedo similar to that of the days classified as clean. We also present observations performed with space-borne and ground-based lidars in the investigated area, from which the vertical distribution of aerosol extinction in the atmosphere can be measured. This study suggests that the present satellite retrievals of AOT, particularly those obtained with the MODIS c005 algorithm, will be very useful, in combination with the lidar measurements, for validating regional and climate models over Europe.
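The envelope test used throughout this comparison is easy to state in code. The sketch below, with illustrative array names, computes the fraction of collocated retrievals falling within the MODIS over-land expected uncertainty Δτ = ±(0.05 + 0.15τ) around AERONET AOT, together with the RMSD.

```python
import numpy as np

def envelope_stats(tau_sat, tau_aeronet):
    """tau_sat, tau_aeronet: collocated AOT arrays. Returns (fraction of
    retrievals inside dtau = +/-(0.05 + 0.15*tau_aeronet), RMSD)."""
    tau_sat = np.asarray(tau_sat, dtype=float)
    tau_aeronet = np.asarray(tau_aeronet, dtype=float)
    half_width = 0.05 + 0.15 * tau_aeronet
    inside = np.abs(tau_sat - tau_aeronet) <= half_width
    rmsd = np.sqrt(np.mean((tau_sat - tau_aeronet) ** 2))
    return float(inside.mean()), float(rmsd)
```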
Biological Applications and Effects of Optical Masers.
1984-06-01
... term ocular effects of optical radiation on aging and macular degeneration is discussed, and a final draft of the report of the Working Group assessing ... exposure to short wavelength light on aging and degeneration of the retina and lens leading to degenerative maculopathies and senile cataract. Dr. Ham ... chaired the Working Group assigned the task of assessing light damage to the RPE and its possible relationship to aging and macular degeneration of the ...