Design and realization of the baseband processor in satellite navigation and positioning receiver
NASA Astrophysics Data System (ADS)
Zhang, Dawei; Hu, Xiulin; Li, Chen
2007-11-01
This paper focuses on the design and realization of the baseband processor in a satellite navigation and positioning receiver. The baseband processor is the most important part of the satellite positioning receiver. The design covers the baseband processor's main functions, including multi-channel digital down-conversion (DDC), acquisition, code tracking, carrier tracking, and demodulation. The realization is based on an Altera FPGA device, so the system can be improved and upgraded without modifying the hardware. It embodies the theory of software-defined radio (SDR) and puts spread-spectrum theory into practice. The paper emphasizes the realization of the baseband processor in the FPGA: the flow is presented in detail, from chip selection through design entry, debugging, and synthesis. Additionally, the paper details the realization of the digital PLL in order to explain a method of reducing FPGA resource consumption. Finally, the paper presents the synthesis results. This design has been used in BD-1, BD-2, and GPS receivers.
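The carrier-tracking loop mentioned above can be illustrated with a small numerical sketch. This is a generic second-order digital PLL in Python for intuition only, not the paper's FPGA implementation; the loop gains `kp` and `ki` and the signal parameters are illustrative assumptions.

```python
import numpy as np

def track_carrier(samples, f_est, fs, kp=0.1, ki=0.01):
    """Second-order digital PLL: an NCO replica is mixed against the input,
    and the phase error drives a proportional-plus-integral loop filter."""
    phase = 0.0
    freq = 2 * np.pi * f_est / fs                # initial NCO step, rad/sample
    phases = np.empty(len(samples))
    for n, x in enumerate(samples):
        err = np.angle(x * np.exp(-1j * phase))  # phase detector
        freq += ki * err                         # integral path (frequency)
        phase += freq + kp * err                 # NCO update + proportional path
        phases[n] = phase
    return phases

# usage: lock onto a 1 kHz carrier sampled at 48 kHz, starting 30 Hz off
fs = 48e3
n = np.arange(4096)
sig = np.exp(1j * 2 * np.pi * 1000.0 * n / fs)
est = track_carrier(sig, 1030.0, fs)
# after convergence the NCO step matches the true carrier frequency
```

The integral path absorbs the frequency offset while the proportional path damps the transient, which is the same division of labor a hardware DPLL makes between its loop-filter accumulator and its NCO.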
Electronics for CMS Endcap Muon Level-1 Trigger System Phase-1 and HL LHC upgrades
NASA Astrophysics Data System (ADS)
Madorsky, A.
2017-07-01
To accommodate high-luminosity LHC operation at a 13 TeV collision energy, the CMS Endcap Muon Level-1 Trigger system had to be significantly modified. To provide robust track reconstruction, the trigger system must now import all available trigger primitives generated by the Cathode Strip Chambers and by certain other subsystems, such as Resistive Plate Chambers (RPC). In addition to massive input bandwidth, this also required a significant increase in logic and memory resources. To satisfy these requirements, a new Sector Processor unit has been designed. It consists of three modules. The Core Logic module houses the large FPGA that contains the track-finding logic and multi-gigabit serial links for data exchange. The Optical module contains optical receivers and transmitters; it communicates with the Core Logic module via a custom backplane section. The Pt Lookup Table (PTLUT) module contains 1 GB of low-latency memory that is used to assign the final Pt to reconstructed muon tracks. The μTCA architecture (adopted by CMS) was used for this design. The talk presents the details of the hardware and firmware design of the production system, based on the Xilinx Virtex-7 FPGA family. The next round of LHC and CMS upgrades starts in 2019, followed by a major High-Luminosity (HL) LHC upgrade starting in 2024. In the course of these upgrades, new Gas Electron Multiplier (GEM) detectors and more RPC chambers will be added to the Endcap Muon system. In order to keep up with all these changes, a new Advanced Processor unit is being designed. This device will be based on Xilinx UltraScale+ FPGAs. It will be able to accommodate up to 100 serial links with bit rates of up to 25 Gb/s, and provide up to 2.5 times more logic resources than the device used currently. The amount of PTLUT memory will be significantly increased to provide more flexibility for the Pt assignment algorithm. The talk presents preliminary details of the hardware design.
Missile signal processing common computer architecture for rapid technology upgrade
NASA Astrophysics Data System (ADS)
Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul
2004-10-01
Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain consists of two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration, and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction, and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs, and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's-Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements.
This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Standardized development tools and third-party software upgrades are enabled, as is rapid upgrading of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability over a custom approach at the time of deployment, as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system and, because modifications are simple, can migrate between weapon system variants. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS) and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of interceptor algorithms operating on this real-time platform are provided.
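The front-end non-uniformity correction step described above can be sketched with a generic two-point NUC in Python. The calibration procedure shown (two flat-field frames at known flux levels) is a standard textbook method, not the paper's specific implementation, and all pixel statistics are invented for illustration.

```python
import numpy as np

def two_point_nuc(frame, f_low, f_high, low, high):
    """Two-point non-uniformity correction: map each pixel's raw response
    onto the common linear scale defined by two flat-field frames."""
    gain = (high - low) / (f_high - f_low)   # per-pixel gain (1/responsivity)
    return (frame - f_low) * gain + low

# usage: simulate a focal plane with per-pixel offset and responsivity
rng = np.random.default_rng(3)
offset = rng.normal(100.0, 5.0, (8, 8))      # per-pixel dark level
resp = rng.normal(1.0, 0.1, (8, 8))          # per-pixel responsivity
low, high = 200.0, 1000.0
f_low = offset + resp * low                  # flat field at low flux
f_high = offset + resp * high                # flat field at high flux
scene = offset + resp * 600.0                # uniform scene at flux 600
corrected = two_point_nuc(scene, f_low, f_high, low, high)
# the fixed-pattern noise cancels: corrected is uniform at 600.0
```

Because the correction is a per-pixel multiply-add, it maps naturally onto the FPGA/ASIC pipelines the abstract mentions, and equally well onto the vector units of a COTS processor.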
Searching for New Physics with Top Quarks and Upgrade to the Muon Spectrometer at ATLAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwarz, Thomas Andrew
2015-06-29
Over the funding period of this award, my research has focused on searching for new physics with top quarks and in the Higgs sector. The highly energetic top quark events at the LHC are an excellent venue to search for new physics, as well as to make standard model measurements. Further, the recent discovery of the Higgs boson motivates searching for new physics that could be associated with it. This one-year award has facilitated the beginning of my research program, which has resulted in four publications, several conference talks, and multiple leadership positions within physics groups. Additionally, we are contributing to ATLAS upgrades and operations. As part of the Phase I upgrade, I have taken on the responsibility of the design, prototyping, and quality control of a signal packet router for the trigger electronics of the New Small Wheel. This is a critical component of the upgrade, as the router is the main switchboard for all trigger signals to the track-finding processors. I am also leading the Phase II upgrade of the readout electronics of the muon spectrometer, and have been selected as the USATLAS Level-2 manager of the Phase II upgrade of the muon spectrometer. The award has been critical in these contributions to the experiment.
Clinical Validation of a Sound Processor Upgrade in Direct Acoustic Cochlear Implant Subjects
Kludt, Eugen; D’hondt, Christiane; Lenarz, Thomas; Maier, Hannes
2017-01-01
Objective: The objectives of the investigation were to evaluate the effect of a sound processor upgrade on the speech reception threshold in noise and to collect long-term safety and efficacy data after 2½ to 5 years of device use of direct acoustic cochlear implant (DACI) recipients. Study Design: The study was designed as a mono-centric, prospective clinical trial. Setting: Tertiary referral center. Patients: Fifteen patients implanted with a direct acoustic cochlear implant. Intervention: Upgrade with a newer generation of sound processor. Main Outcome Measures: Speech recognition test in quiet and in noise, pure tone thresholds, subject-reported outcome measures. Results: The speech recognition in quiet and in noise is superior after the sound processor upgrade and stable after long-term use of the direct acoustic cochlear implant. The bone conduction thresholds did not decrease significantly after long-term high level stimulation. Conclusions: The new sound processor for the DACI system provides significant benefits for DACI users for speech recognition in both quiet and noise. Especially the noise program with the use of directional microphones (Zoom) allows DACI patients to have much less difficulty when having conversations in noisy environments. Furthermore, the study confirms that the benefits of the sound processor upgrade are available to the DACI recipients even after several years of experience with a legacy sound processor. Finally, our study demonstrates that the DACI system is a safe and effective long-term therapy. PMID:28406848
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava
2017-01-01
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Processors and GPUs
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; Masciovecchio, Mario; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2017-08-01
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as Graphical Processing Units (GPU), ARM CPUs, and Intel MICs. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as specialized vector or SIMD units, requires special care in algorithm design and code optimization. One of the most computationally challenging problems in high-energy particle experiments is finding and fitting the charged-particle tracks during event reconstruction. This is expected to become by far the dominant problem at the High-Luminosity Large Hadron Collider (HL-LHC), for example. Today the most common track finding methods are those based on the Kalman filter. Experience with Kalman techniques on real tracking detector systems has shown that they are robust and provide high physics performance. This is why they are currently in use at the LHC, both in the trigger and offline. Previously we reported on the significant parallel speedups that resulted from our investigations to adapt Kalman filters to track fitting and track building on Intel Xeon and Xeon Phi. Here, we discuss our progress toward understanding these processors and the new developments to port the Kalman filter to NVIDIA GPUs.
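The Kalman-filter track fit at the heart of this work can be sketched in its simplest scalar form. This is a textbook one-dimensional constant-velocity filter for intuition only, not the experiments' vectorized many-core implementation; the process and measurement noise parameters `q` and `r` are illustrative assumptions.

```python
import numpy as np

def kalman_fit(zs, dt=1.0, q=1e-3, r=0.25):
    """Fit a constant-velocity 1D track to noisy position measurements.
    Returns the filtered [position, velocity] state after each update."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([zs[0], 0.0])              # initial state
    P = np.eye(2)                           # initial state covariance
    out = []
    for z in zs[1:]:
        # predict: propagate state and covariance to the next layer/step
        x = F @ x
        P = F @ P @ F.T + Q
        # update: fold in the new measurement
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# usage: straight "track" with slope 0.5 and Gaussian measurement noise
rng = np.random.default_rng(0)
t = np.arange(200.0)
zs = 0.5 * t + rng.normal(0.0, 0.5, t.size)
states = kalman_fit(zs)
# the fitted velocity converges toward the true slope of 0.5
```

In a real tracker the same predict/update cycle runs per detector layer with a larger state vector; the parallelization challenge the abstract describes comes from running many such fits concurrently and vectorizing the small matrix algebra.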
NASA Astrophysics Data System (ADS)
Argemí, O.; Bech, J.; Pineda, N.; Rigo, T.
2009-09-01
Remote sensing observing systems of the Meteorological Service of Catalonia (SMC) have been upgraded in recent years with newer technologies and enhancements. Recent changes to the weather radar network have been motivated by the need to improve radar precipitation estimates as well as meteorological surveillance in the area of Catalonia. This region covers approximately 32,000 square kilometres and is located in the NE of Spain, limited by the Pyrenees to the North (with mountains exceeding 3000 m) and by the Mediterranean Sea to the East and South. In the case of the total lightning (intra-cloud and cloud-to-ground lightning) detection system, the current upgrades will assure better lightning detection efficiency and location accuracy. Both upgraded systems help to enhance the tracking and the study of thunderstorm events. Initially, the weather radar network was designed to cover the complex topography of Catalonia and surrounding areas to support the regional administration, which includes civil protection and water authorities. The weather radar network was upgraded in 2008 with the addition of a new C-band Doppler radar system, located on top of La Miranda Mountain (Tivissa) in the southern part of Catalonia, enhancing the coverage, particularly to the South and South-West. Technically, the new radar is very similar to the previous one installed in 2003 (Creu del Vent radar), using a 4 m antenna (i.e., 1 degree beam width), a Vaisala-Sigmet RVP-8 digital receiver and processor, and a low-power transmitter using a Travelling Wave Tube (TWT) amplifier. This design allows the use of pulse-compression techniques to enhance radial resolution and sensitivity. Currently, the SMC is upgrading its total lightning detection system, operational since 2003. While a fourth sensor (Amposta) was added last year to enlarge the system coverage, all sensors and the central processor will be upgraded this year to Vaisala's new total lightning location technology.
The new LS8000 sensor configuration integrates two lightning detection technologies: VHF interferometry provides high performance in the detection of cloud lightning, while LF combined magnetic direction finding and time-of-arrival technology offers high detection efficiency and accurate location for cloud-to-ground lightning strokes. The presentation describes in some detail all these innovations in remote sensing observing networks and also reports some examples over Catalonia, which is frequently affected by different types of convective events, including severe weather (large hail, tornadic events, etc.) and heavy rainfall episodes.
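The pulse-compression technique mentioned for the TWT-based radar above can be illustrated with a toy matched-filter example: a long, low-power linear FM chirp is transmitted, and correlating the received stream with the transmit replica compresses the echo into a sharp peak at the echo delay. The chirp length, bandwidth, and amplitudes here are invented for illustration, not the radar's actual waveform.

```python
import numpy as np

def lfm_chirp(n, bw):
    """Linear FM chirp of n samples sweeping a fraction bw of the
    normalized (Nyquist) band."""
    t = np.arange(n)
    return np.exp(1j * np.pi * bw * t**2 / n)

def pulse_compress(rx, tx):
    """Matched filter: correlate the received stream with the transmit
    replica (np.correlate conjugates its second argument for complex input)."""
    return np.abs(np.correlate(rx, tx, mode="valid"))

# usage: a weak 128-sample chirp echo buried in noise, starting at sample 100
tx = lfm_chirp(128, 0.8)
rx = np.zeros(512, dtype=complex)
rx[100:228] += 0.1 * tx                      # weak echo
rx += 0.05 * np.random.default_rng(2).normal(size=512)
y = pulse_compress(rx, tx)
peak = int(np.argmax(y))                     # compressed peak at the echo delay
```

The compressed peak gains roughly the time-bandwidth product over the raw echo amplitude, which is why a low-power TWT transmitter can still reach good sensitivity and radial resolution.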
New tracking implementation in the Deep Space Network
NASA Technical Reports Server (NTRS)
Berner, Jeff B.; Bryant, Scott H.
2001-01-01
As part of the Network Simplification Project, the tracking system of the Deep Space Network is being upgraded. This upgrade replaces the discrete logic sequential ranging system with a system that is based on commercial Digital Signal Processor boards. The new implementation allows both sequential and pseudo-noise types of ranging. The other major change is a modernization of the data formatting. Previously, there were several types of interfaces, delivering both intermediate data and processed data (called 'observables'). All of these interfaces were bit-packed blocks, which do not allow for easy expansion, and many of these interfaces required knowledge of the specific hardware implementations. The new interface supports four classes of data: raw (direct from the measuring equipment), derived (the observable data), interferometric (multiple antenna measurements), and filtered (data whose values depend on multiple measurements). All of the measurements are reported at the sky frequency or phase level, so that no knowledge of the actual hardware is required. The data is formatted into Standard Formatted Data Units, as defined by the Consultative Committee for Space Data Systems, so that expansion and cross-center usage is greatly enhanced.
Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger
NASA Astrophysics Data System (ADS)
Conde Muíño, P.; ATLAS Collaboration
2017-10-01
General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located at the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPUs. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼250 ms for this task. The selection in the High Level Trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will grow with future LHC upgrades. During the LHC data taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPUs as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and the serial code remaining on the CPU; the number of GPGPUs required; and the relative financial cost of the selected GPGPUs. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.
Kalman Filter Tracking on Parallel Architectures
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Krutelyov, Slava; Lantz, Steven; Lefebvre, Matthieu; McDermott, Kevin; Riley, Daniel; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2016-11-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors such as GPGPU, ARM and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High-Luminosity Large Hadron Collider (HL-LHC), for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques such as Cellular Automata or Hough Transforms. The most common track finding techniques in use today, however, are those based on a Kalman filter approach. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are in use today at the LHC. Given the utility of the Kalman filter in track finding, we have begun to port these algorithms to parallel architectures, namely Intel Xeon and Xeon Phi. We report here on our progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a simplified experimental environment.
LHCb Kalman Filter cross architecture studies
NASA Astrophysics Data System (ADS)
Cámpora Pérez, Daniel Hugo
2017-10-01
The 2020 upgrade of the LHCb detector will vastly increase the rate of collisions the Online system needs to process in software, in order to filter events in real time. Thirty million collisions per second will pass through a selection chain, where each step is executed conditionally on acceptance by the previous one. The Kalman filter is a fit applied to all reconstructed tracks which, due to its timing characteristics and early execution in the selection chain, consumes 40% of the whole reconstruction time in the current trigger software. This makes the Kalman filter a time-critical component as the LHCb trigger evolves into a full software trigger in the Upgrade. I present a new Kalman filter algorithm for LHCb that can efficiently make use of any kind of SIMD processor, and explain its design in depth. Performance benchmarks are compared across a variety of hardware architectures, including x86_64, Power8, and the Intel Xeon Phi accelerator, and the suitability of these architectures for efficiently performing the LHCb reconstruction is determined.
[Improving speech comprehension using a new cochlear implant speech processor].
Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A
2009-06-01
The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually, profoundly deaf, experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise.
Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg sentences in the clinical setting S(0)N(CI), with the speech signal at 0 degrees and noise lateral to the CI at 90 degrees. With the convincing findings from our evaluations of this multicenter study cohort, a trial with the Freedom 24 sound processor for all suitable CI users is recommended. For evaluating the benefits of a new processor, the comparative assessment paradigm used in our study design would be considered ideal for use with individual patients.
ERIC Educational Resources Information Center
Perez, Ernest
1997-01-01
Examines the practical realities of upgrading Intel personal computers in libraries, considering budgets and technical personnel availability. Highlights include adding RAM; putting in faster processor chips, including clock multipliers; new hard disks; CD-ROM speed; motherboards and interface cards; cost limits and economic factors; and…
The Software Correlator of the Chinese VLBI Network
NASA Technical Reports Server (NTRS)
Zheng, Weimin; Quan, Ying; Shu, Fengchun; Chen, Zhong; Chen, Shanshan; Wang, Weihua; Wang, Guangli
2010-01-01
The software correlator of the Chinese VLBI Network (CVN) has played an irreplaceable role in CVN routine data processing, e.g., in the Chinese lunar exploration project. This correlator will be upgraded to process geodetic and astronomical observation data. In the future, with several new stations joining the network, CVN will carry out crustal movement observations, quick UT1 measurements, astrophysical observations, and deep space exploration activities. For the geodetic or astronomical observations, we need a wide-band 10-station correlator. For spacecraft tracking, a real-time and highly reliable correlator is essential. To meet the scientific and navigation requirements of CVN, two parallel software correlators for multiprocessor environments are under development. A high-speed, 10-station prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm on a computer cluster platform is being developed. Another real-time software correlator for spacecraft tracking adopts thread-level parallelism and runs on SMP (Symmetric Multiprocessor) servers. Both correlators are characterized by a flexible structure and scalability.
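The core operation of such a software correlator can be sketched in a few lines of FX-style processing: transform each station's sample stream to the frequency domain, multiply one spectrum by the conjugate of the other, and inverse-transform; the peak of the resulting cross-correlation gives the relative delay between stations. This is a generic single-baseline sketch, not CVN code, and omits fringe rotation, channelization, and integration.

```python
import numpy as np

def fx_correlate(a, b):
    """FX correlation: FFT each stream, cross-multiply, inverse FFT.
    Returns the circular cross-correlation of a against b."""
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

# usage: the same noise waveform reaches station b 7 samples late
rng = np.random.default_rng(1)
s = rng.normal(size=1024)
xc = fx_correlate(np.roll(s, 7), s)   # station b stream vs. station a stream
delay = int(np.argmax(xc))            # → 7, the relative delay in samples
```

Because the FFTs of different stations and time blocks are independent, this structure maps directly onto the Pthreads/MPI decomposition the abstract describes.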
Sobol, Wlad T
2002-01-01
A simple kinetic model that describes the time evolution of the chemical concentration of an arbitrary compound within the tank of an automatic film processor is presented. It provides insights into the kinetics of chemistry concentration inside the processor's tank; the results facilitate the tasks of processor tuning and quality control (QC). The model has successfully been used in several troubleshooting sessions of low-volume mammography processors for which maintaining consistent QC tracking was difficult due to fluctuations of bromide levels in the developer tank.
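The abstract does not give the model's equations, but the simplest kinetic model of this kind treats the developer tank as a well-stirred volume whose concentration relaxes exponentially toward the replenisher level. The following sketch is a hedged, generic illustration under that assumption, not the paper's actual model; all constants are arbitrary.

```python
import numpy as np

def tank_concentration(c0, c_rep, rate, t):
    """Well-stirred tank with constant-rate replenishment: the chemical
    concentration relaxes exponentially from c0 toward the replenisher
    concentration c_rep, with 'rate' set by the turnover (flow/volume)."""
    return c_rep + (c0 - c_rep) * np.exp(-rate * t)

# usage: developer starts at 1.0 (arbitrary units), replenisher holds 0.5
t = np.linspace(0.0, 50.0, 6)
c = tank_concentration(1.0, 0.5, 0.2, t)
# c decays from 1.0 and settles at the replenisher concentration 0.5
```

Such a model makes the QC implication concrete: a low-volume processor has a small effective `rate`, so bromide and developer levels drift slowly and seasoning transients last a long time.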
Hey, Matthias; Hocke, Thomas; Mauger, Stefan; Müller-Deile, Joachim
2016-11-01
Individual speech intelligibility was measured in quiet and in noise for cochlear implant recipients upgrading from the Freedom to the CP900 series sound processor. The postlingually deafened participants (n = 23) used either a Nucleus CI24RE or CI512 cochlear implant and currently wore a Freedom sound processor. A significant group mean improvement in speech intelligibility was found in quiet (Freiburg monosyllabic words at 50 dB SPL) and in noise (adaptive Oldenburger sentences in noise) for the two CP900 series SmartSound programs compared to the Freedom program. Further analysis was carried out on individuals' speech intelligibility outcomes in quiet and in noise. Results showed a significant improvement or decrement for some recipients when upgrading to the new programs. To further increase speech intelligibility outcomes when upgrading, an enhanced upgrade procedure is proposed that includes additional testing with different signal-processing schemes. An implication of this research is that future automated scene analysis and switching technologies could provide additional performance improvements by introducing individualized scene-dependent settings.
NASA Astrophysics Data System (ADS)
Feng, Bing
Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computations. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10² processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement from the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models: the beam space charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and validating the code. Moderate emittance growth is observed for the upgrade of increasing the bunch population by 5 times. But the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth.
Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac (ERL), where extremely small emittance and high peak currents are anticipated. A tune shift is discovered in the simulation; however, emittance growth of the electron beam in the electron cloud is not observed for ERL parameters.
A VME-based software trigger system using UNIX processors
NASA Astrophysics Data System (ADS)
Atmur, Robert; Connor, David F.; Molzon, William
1997-02-01
We have constructed a distributed computing platform with eight processors to assemble and filter data from digitization crates. The filtered data were transported to a tape-writing UNIX computer via ethernet. Each processor ran a UNIX operating system and was installed in its own VME crate. Each VME crate contained dual-port memories which interfaced with the digitizers. Using standard hardware and software (VME and UNIX) allows us to select from a wide variety of non-proprietary products and makes upgrades simpler, if they are necessary.
A hardware fast tracker for the ATLAS trigger
NASA Astrophysics Data System (ADS)
Asbah, Nedaa
2016-09-01
The trigger system of the ATLAS experiment is designed to reduce the event rate from the LHC nominal bunch crossing rate of 40 MHz to about 1 kHz at the design luminosity of 10³⁴ cm⁻² s⁻¹. After a successful period of data taking from 2010 to early 2013, the LHC has restarted with a much higher instantaneous luminosity. This will increase the load on the High Level Trigger, the second stage of the selection, which is based on software algorithms. More sophisticated algorithms will be needed to achieve higher background rejection while maintaining good efficiency for interesting physics signals. The Fast TracKer (FTK) is part of the ATLAS trigger upgrade project. It is a hardware processor that will provide, for every Level-1-accepted event (at 100 kHz) and within 100 microseconds, full tracking information for tracks with momentum as low as 1 GeV. By providing fast, extensive access to tracking information, with resolution comparable to the offline reconstruction, FTK will help in the precise detection of primary and secondary vertices, ensuring robust selections and improving trigger performance. FTK exploits hardware technologies with massive parallelism, combining Associative Memory ASICs, FPGAs, and high-speed communication links.
Mountainous Coasts: A change to the GFS post codes will remove a persistent, spurious high pressure system. The National Centers for Environmental Prediction (NCEP) will upgrade the GFS post processor. The primary effort behind this upgrade will be to unify the post-processing code for the North American Mesoscale (NAM) model and the GFS...
An "artificial retina" processor for track reconstruction at the full LHC crossing rate
NASA Astrophysics Data System (ADS)
Abba, A.; Bedeschi, F.; Caponio, F.; Cenci, R.; Citterio, M.; Cusimano, A.; Fu, J.; Geraci, A.; Grizzuti, M.; Lusardi, N.; Marino, P.; Morello, M. J.; Neri, N.; Ninci, D.; Petruzzo, M.; Piucci, A.; Punzi, G.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.; Walsh, J.
2016-07-01
We present the latest results of an R&D study of a specialized processor capable of reconstructing, in a silicon pixel detector, high-quality tracks from high-energy collision events at 40 MHz. The processor applies a highly parallel pattern-recognition algorithm inspired by the quick detection of edges in the mammalian visual cortex. After a detailed study of a real-detector application, demonstrating that online reconstruction of offline-quality tracks is feasible at 40 MHz with sub-microsecond latency, we are implementing a prototype using common high-bandwidth FPGA devices.
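The retina-like pattern recognition described above can be illustrated with a toy model: a grid of cells, each tuned to a candidate track (slope, intercept), accumulates a Gaussian-weighted response from every detector hit, and the best track is the cell with the maximum response. Parameter ranges and the Gaussian weighting width below are illustrative assumptions, not values from the R&D study.

```python
import numpy as np

def retina_response(hits, slopes, intercepts, sigma=0.1):
    """Accumulate each hit's Gaussian-weighted excitation on every cell."""
    resp = np.zeros((len(slopes), len(intercepts)))
    for x, y in hits:
        for i, m in enumerate(slopes):
            for j, q in enumerate(intercepts):
                d = y - (m * x + q)          # hit residual to the cell's line
                resp[i, j] += np.exp(-d * d / (2.0 * sigma ** 2))
    return resp

slopes = np.linspace(-1.0, 1.0, 21)
intercepts = np.linspace(-1.0, 1.0, 21)
hits = [(x, 0.5 * x + 0.2) for x in np.linspace(0.0, 1.0, 8)]  # m=0.5, q=0.2
resp = retina_response(hits, slopes, intercepts)
i, j = np.unravel_index(resp.argmax(), resp.shape)             # winning cell
```

In hardware, each cell's accumulation runs in parallel, which is what makes the approach compatible with the 40 MHz crossing rate.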
Dazert, Stefan; Thomas, Jan Peter; Büchner, Andreas; Müller, Joachim; Hempel, John Martin; Löwenheim, Hubert; Mlynski, Robert
2017-03-01
The RONDO is a single-unit cochlear implant audio processor that eliminates the need for a behind-the-ear (BTE) audio processor. The primary aim was to compare speech perception results in quiet and in noise with the RONDO and the OPUS 2, a BTE audio processor. Secondary aims were to determine subjects' self-assessed levels of sound quality and to gather subjective feedback on RONDO use. All speech perception tests were performed with the RONDO and the OPUS 2 behind-the-ear audio processor at 3 test intervals. Subjects were required to use the RONDO between test intervals. Subjects were tested at upgrade from the OPUS 2 to the RONDO and at 1 and 6 months after upgrade. Speech perception was determined using the Freiburg Monosyllables test in quiet and the Oldenburg Sentence Test (OLSA) in noise. Subjective perception was determined using the Hearing Implant Sound Quality Index (HISQUI19) and a RONDO device-specific questionnaire. Fifty subjects participated in the study. Neither speech perception scores nor self-perceived sound quality scores were significantly different between the RONDO and the OPUS 2 at any interval. Subjects reported high levels of satisfaction with the RONDO. The RONDO provides speech perception comparable to the OPUS 2 while providing users with high levels of satisfaction and comfort without increasing health risk. The RONDO is a suitable and safe alternative to traditional BTE audio processors.
Multisensor data fusion for integrated maritime surveillance
NASA Astrophysics Data System (ADS)
Premji, A.; Ponsford, A. M.
1995-01-01
A prototype Integrated Coastal Surveillance system has been developed on Canada's East Coast to provide effective surveillance out to and beyond the 200 nautical mile Exclusive Economic Zone. The system has been designed to protect Canada's natural resources, and to monitor and control the coastline for smuggling, drug trafficking, and similar illegal activity. This paper describes the Multiple Sensor - Multiple Target data fusion system that has been developed. The fusion processor has been developed around the celebrated Multiple Hypothesis Tracking algorithm which accommodates multiple targets, new targets, false alarms, and missed detections. This processor performs four major functions: plot-to-track association to form individual radar tracks; fusion of radar tracks with secondary sensor reports; track identification and tagging using secondary reports; and track level fusion to form common tracks. Radar data from coherent and non-coherent radars has been used to evaluate the performance of the processor. This paper presents preliminary results.
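A full Multiple Hypothesis Tracker maintains alternative association hypotheses over time; the simpler gated nearest-neighbour step that underlies plot-to-track association can be sketched as follows. Each new plot is assigned to the closest predicted track position inside a validation gate, otherwise it starts a tentative new track. All numbers and names here are illustrative, not from the described system.

```python
import math

def associate(plots, predicted_tracks, gate=5.0):
    """Assign each plot to its nearest predicted track within the gate."""
    assignments = {}
    for i, (px, py) in enumerate(plots):
        best_id, best_d = None, gate
        for tid, (tx, ty) in predicted_tracks.items():
            d = math.hypot(px - tx, py - ty)
            if d < best_d:
                best_id, best_d = tid, d
        assignments[i] = best_id      # None means "start a tentative track"
    return assignments

tracks = {1: (0.0, 0.0), 2: (10.0, 10.0)}
plots = [(0.5, -0.3), (9.2, 10.4), (30.0, 5.0)]   # last plot falls outside all gates
assign = associate(plots, tracks)
```

MHT improves on this greedy step by deferring ambiguous assignments across several scans before committing, which is what makes it robust to false alarms and missed detections.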
Airborne optical tracking control system design study
NASA Astrophysics Data System (ADS)
1992-09-01
The Kestrel LOS Tracking Program involves the development of a computer and algorithms for use in passive tracking of airborne targets from a high-altitude balloon platform. The computer receives track error signals from a video tracker connected to one of the imaging sensors. In addition, an on-board IRU (gyro), accelerometers, a magnetometer, and a two-axis inclinometer provide inputs which are used for initial acquisition and for coarse and fine tracking. Signals received by the control processor from the video tracker, IRU, accelerometers, magnetometer, and inclinometer are used to generate drive signals for the payload azimuth drive, the Gimballed Mirror System (GMS), and the Fast Steering Mirror (FSM). The hardware to be procured under the LOS tracking activity comprises the Controls Processor (CP), the IRU, and the FSM. The performance specifications for the GMS and the payload canister azimuth drive are established by the LOS tracking design team, with the goal of achieving a tracking jitter of less than 3 μrad (1 sigma, per axis).
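A control processor of this kind typically closes a feedback loop from the tracker error to the steering-element commands. As a minimal sketch, assuming a simple proportional-integral loop at the video tracker's 60 Hz update rate and a crude first-order mirror response (gains and rates here are illustrative, not Kestrel values):

```python
class PIController:
    """Discrete proportional-integral loop for one steering axis."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

ctrl = PIController(kp=2.0, ki=10.0, dt=1.0 / 60.0)   # 60 Hz tracker updates
pointing = 50e-6            # initial line-of-sight error, radians (50 urad)
for _ in range(120):        # two seconds of closed-loop operation
    cmd = ctrl.step(pointing)
    pointing -= cmd * (1.0 / 60.0) * 10.0   # crude first-order mirror response
```

In the real system the error would be split between the slower gimballed mirror (large excursions) and the fast steering mirror (small, high-frequency residuals), but the loop structure per axis is of this form.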
78 FR 41116 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-09
... Agreement State regulations. All generators, collectors, and processors of low-level waste intended for... which facilitates tracking the identity of the waste generator. That tracking becomes more complicated... waste shipped from a waste processor may contain waste from several different generators. The...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-12
... Regulations (10 CFR) or equivalent Agreement State regulations. All generators, collectors, and processors of... which facilitates tracking the identity of the waste generator. That tracking becomes more complicated... waste shipped from a waste processor may contain waste from several different generators. The...
Set processing in a network environment. [data bases and magnetic disks and tapes
NASA Technical Reports Server (NTRS)
Hardgrave, W. T.
1975-01-01
A combination of a local network, a mass storage system, and an autonomous set processor serving as a data/storage management machine is described. Its characteristics include: content-accessible data bases usable from all connected devices; efficient storage/access of large data bases; simple and direct programming with data manipulation and storage management handled by the set processor; simple data base design and entry from source representation to set processor representation with no predefinition necessary; capability available for user sort/order specification; significant reduction in tape/disk pack storage and mounts; flexible environment that allows upgrading hardware/software configuration without causing major interruptions in service; minimal traffic on data communications network; and improved central memory usage on large processors.
Magalhães, Ana Tereza de Matos; Goffi-Gomez, M Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens
2013-09-01
To identify the technological contributions of the newer version of speech processor to the first generation of multichannel cochlear implant and the satisfaction of users of the new technology. Among the new features available, we focused on the effect of the frequency allocation table, the T-SPL and C-SPL, and the preprocessing gain adjustments (adaptive dynamic range optimization). Prospective exploratory study. Cochlear implant center at hospital. Cochlear implant users of the Spectra processor with speech recognition in closed set. Seventeen patients were selected between the ages of 15 and 82 and deployed for more than 8 years. The technology update of the speech processor for the Nucleus 22. To determine Freedom's contribution, thresholds and speech perception tests were performed with the last map used with the Spectra and the maps created for Freedom. To identify the effect of the frequency allocation table, both upgraded and converted maps were programmed. One map was programmed with 25 dB T-SPL and 65 dB C-SPL and the other map with adaptive dynamic range optimization. To assess satisfaction, SADL and APHAB were used. All speech perception tests and all sound field thresholds were statistically better with the new speech processor; 64.7% of patients preferred maintaining the same frequency table that was suggested for the older processor. The sound field threshold was statistically significant at 500, 1,000, 1,500, and 2,000 Hz with 25 dB T-SPL/65 dB C-SPL. Regarding patient's satisfaction, there was a statistically significant improvement, only in the subscale of speech in noise abilities and phone use. The new technology improved the performance of patients with the first generation of multichannel cochlear implant.
Test beam performance measurements for the Phase I upgrade of the CMS pixel detector
NASA Astrophysics Data System (ADS)
Dragicevic, M.; Friedl, M.; Hrubec, J.; Steininger, H.; Gädda, A.; Härkönen, J.; Lampén, T.; Luukka, P.; Peltola, T.; Tuominen, E.; Tuovinen, E.; Winkler, A.; Eerola, P.; Tuuva, T.; Baulieu, G.; Boudoul, G.; Caponetto, L.; Combaret, C.; Contardo, D.; Dupasquier, T.; Gallbit, G.; Lumb, N.; Mirabito, L.; Perries, S.; Vander Donckt, M.; Viret, S.; Bonnin, C.; Charles, L.; Gross, L.; Hosselet, J.; Tromson, D.; Feld, L.; Karpinski, W.; Klein, K.; Lipinski, M.; Pierschel, G.; Preuten, M.; Rauch, M.; Wlochal, M.; Aldaya, M.; Asawatangtrakuldee, C.; Beernaert, K.; Bertsche, D.; Contreras-Campana, C.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Gallo, E.; Garay Garcia, J.; Hansen, K.; Haranko, M.; Harb, A.; Hauk, J.; Keaveney, J.; Kalogeropoulos, A.; Kleinwort, C.; Lohmann, W.; Mankel, R.; Maser, H.; Mittag, G.; Muhl, C.; Mussgiller, A.; Pitzl, D.; Reichelt, O.; Savitskyi, M.; Schütze, P.; Sola, V.; Spannagel, S.; Walsh, R.; Zuber, A.; Biskop, H.; Buhmann, P.; Centis-Vignali, M.; Garutti, E.; Haller, J.; Hoffmann, M.; Klanner, R.; Lapsien, T.; Matysek, M.; Perieanu, A.; Scharf, Ch.; Schleper, P.; Schmidt, A.; Schwandt, J.; Sonneveld, J.; Steinbrück, G.; Vormwald, B.; Wellhausen, J.; Abbas, M.; Amstutz, C.; Barvich, T.; Barth, Ch.; Boegelspacher, F.; De Boer, W.; Butz, E.; Casele, M.; Colombo, F.; Dierlamm, A.; Freund, B.; Hartmann, F.; Heindl, S.; Husemann, U.; Kornmeyer, A.; Kudella, S.; Muller, Th.; Simonis, H. J.; Steck, P.; Weber, M.; Weiler, Th.; Kiss, T.; Siklér, F.; Tölyhi, T.; Veszprémi, V.; Cariola, P.; Creanza, D.; De Palma, M.; De Robertis, G.; Fiore, L.; Franco, M.; Loddo, F.; Sala, G.; Silvestris, L.; Maggi, G.; My, S.; Selvaggi, G.; Albergo, S.; Cappello, G.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Saizu, M. A.; Tricomi, A.; Tuve, C.; Focardi, E.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Malvezzi, S.; Manzoni, R. 
A.; Menasce, D.; Moroni, L.; Pedrini, D.; Azzi, P.; Bacchetta, N.; Bisello, D.; Dall'Osso, M.; Pozzobon, N.; Tosi, M.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Cecchi, C.; Checcucci, B.; Ciangottini, D.; Fanò, L.; Gentsos, C.; Ionica, M.; Leonardi, R.; Manoni, E.; Mantovani, G.; Marconi, S.; Mariani, V.; Menichelli, M.; Modak, A.; Morozzi, A.; Moscatelli, F.; Passeri, D.; Placidi, P.; Postolache, V.; Rossi, A.; Saha, A.; Santocchia, A.; Storchi, L.; Spiga, D.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Basti, A.; Boccali, T.; Borrello, L.; Bosi, F.; Castaldi, R.; Ceccanti, M.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Magazzu, G.; Mammini, P.; Mariani, F.; Mazzoni, E.; Messineo, A.; Moggi, A.; Morsani, F.; Palla, F.; Palmonari, F.; Profeti, A.; Raffaelli, F.; Ragonesi, A.; Rizzi, A.; Soldani, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Abbaneo, D.; Ahmed, I.; Albert, E.; Auzinger, G.; Berruti, G.; Bonnaud, J.; Daguin, J.; D'Auria, A.; Detraz, S.; Dondelewski, O.; Engegaard, B.; Faccio, F.; Frank, N.; Gill, K.; Honma, A.; Kornmayer, A.; Labaza, A.; Manolescu, F.; McGill, I.; Mersi, S.; Michelis, S.; Onnela, A.; Ostrega, M.; Pavis, S.; Peisert, A.; Pernot, J.-F.; Petagna, P.; Postema, H.; Rapacz, K.; Sigaud, C.; Tropea, P.; Troska, J.; Tsirou, A.; Vasey, F.; Verlaat, B.; Vichoudis, P.; Zwalinski, L.; Bachmair, F.; Becker, R.; di Calafiori, D.; Casal, B.; Berger, P.; Djambazov, L.; Donega, M.; Grab, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meinhard, M.; Perozzi, L.; Roeser, U.; Starodumov, A.; Tavolaro, V.; Wallny, R.; Zhu, D.; Amsler, C.; Bösiger, K.; Caminada, L.; Canelli, F.; Chiochia, V.; de Cosa, A.; Galloni, C.; Hreus, T.; Kilminster, B.; Lange, C.; Maier, R.; Ngadiuba, J.; Pinna, D.; Robmann, P.; Taroni, S.; Yang, Y.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Kaestli, H.-C.; 
Kotlinski, D.; Langenegger, U.; Meier, B.; Rohe, T.; Streuli, S.; Chen, P.-H.; Dietz, C.; Fiori, F.; Grundler, U.; Hou, W.-S.; Lu, R.-S.; Moya, M.; Tsai, J.-F.; Tzeng, Y. M.; Cussans, D.; Goldstein, J.; Grimes, M.; Newbold, D.; Hobson, P.; Reid, I. D.; Auzinger, G.; Bainbridge, R.; Dauncey, P.; Hall, G.; James, T.; Magnan, A.-M.; Pesaresi, M.; Raymond, D. M.; Uchida, K.; Durkin, T.; Harder, K.; Shepherd-Themistocleous, C.; Chertok, M.; Conway, J.; Conway, R.; Flores, C.; Lander, R.; Pellett, D.; Ricci-Tam, F.; Squires, M.; Thomson, J.; Yohay, R.; Burt, K.; Ellison, J.; Hanson, G.; Olmedo, M.; Si, W.; Yates, B. R.; Dominguez, A.; Bartek, R.; Bentele, B.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Apresyan, A.; Bolla, G.; Burkett, K.; Butler, J. N.; Canepa, A.; Cheung, H. W. K.; Christian, D.; Cooper, W. E.; Deptuch, G.; Derylo, G.; Gingu, C.; Grünendahl, S.; Hasegawa, S.; Hoff, J.; Howell, J.; Hrycyk, M.; Jindariani, S.; Johnson, M.; Kahlid, F.; Kwan, S.; Lei, C. M.; Lipton, R.; Lopes De Sá, R.; Liu, T.; Los, S.; Matulik, M.; Merkel, P.; Nahn, S.; Prosser, A.; Rivera, R.; Schneider, B.; Sellberg, G.; Shenai, A.; Siehl, K.; Spiegel, L.; Tran, N.; Uplegger, L.; Voirin, E.; Berry, D. R.; Chen, X.; Ennesser, L.; Evdokimov, A.; Gerber, C. E.; Makauda, S.; Mills, C.; Sandoval Gonzalez, I. D.; Alimena, J.; Antonelli, L. J.; Francis, B.; Hart, A.; Hill, C. S.; Parashar, N.; Stupak, J.; Bortoletto, D.; Bubna, M.; Hinton, N.; Jones, M.; Miller, D. H.; Shi, X.; Baringer, P.; Bean, A.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Schmitz, E.; Wilson, G.; Ivanov, A.; Mendis, R.; Mitchell, T.; Skhirtladze, N.; Taylor, R.; Anderson, I.; Fehling, D.; Gritsan, A.; Maksimovic, P.; Martin, C.; Nash, K.; Osherson, M.; Swartz, M.; Xiao, M.; Acosta, J. G.; Cremaldi, L. M.; Oliveros, S.; Perera, L.; Summers, D.; Bloom, K.; Claes, D. 
R.; Fangmeier, C.; Gonzalez Suarez, R.; Monroy, J.; Siado, J.; Bartz, E.; Gershtein, Y.; Halkiadakis, E.; Kyriacou, S.; Lath, A.; Nash, K.; Osherson, M.; Schnetzer, S.; Stone, R.; Walker, M.; Malik, S.; Norberg, S.; Ramirez Vargas, J. E.; Alyari, M.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kharchilava, A.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; McDermott, K.; Mirman, N.; Rinkevicius, A.; Ryd, A.; Salvati, E.; Skinnari, L.; Soffi, L.; Tao, Z.; Thom, J.; Tucker, J.; Zientek, M.; Akgün, B.; Ecklund, K. M.; Kilpatrick, M.; Nussbaum, T.; Zabel, J.; D'Angelo, P.; Johns, W.; Rose, K.; Choudhury, S.; Korol, I.; Seitz, C.; Vargas Trevino, A.; Dolinska, G.
2017-05-01
A new pixel detector for the CMS experiment was built in order to cope with the instantaneous luminosities anticipated for the Phase I Upgrade of the LHC. The new CMS pixel detector provides four-hit tracking with a reduced material budget as well as new cooling and powering schemes. A new front-end readout chip mitigates buffering and bandwidth limitations, and allows operation at low comparator thresholds. In this paper, comprehensive test beam studies are presented, which have been conducted to verify the design and to quantify the performance of the new detector assemblies in terms of tracking efficiency and spatial resolution. Under optimal conditions, the tracking efficiency is 99.95 ± 0.05%, while the intrinsic spatial resolutions are 4.80 ± 0.25 μm and 7.99 ± 0.21 μm along the 100 μm and 150 μm pixel pitch, respectively. The findings are compared to a detailed Monte Carlo simulation of the pixel detector and good agreement is found.
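A spatial resolution well below pitch/√12 (about 29 μm for a 100 μm pitch) is possible because the deposited charge is shared across neighbouring pixels, so a charge-weighted centroid interpolates the hit position between pixel centres. The pitch and charge values below are illustrative, not CMS calibration data.

```python
def centroid(pixel_centres_um, charges):
    """Charge-weighted centroid of a pixel cluster, in micrometres."""
    total = sum(charges)
    return sum(c * x for x, c in zip(pixel_centres_um, charges)) / total

# two pixels at 100 um pitch sharing charge 30:70 -> hit nearer second pixel
pos = centroid([50.0, 150.0], [30.0, 70.0])
```

Operating the readout chip at low comparator thresholds keeps more of the shared charge above threshold, which is one reason the threshold reduction mentioned above improves resolution.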
First Results of an “Artificial Retina” Processor Prototype
Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro; ...
2016-11-15
We report on the performance of a specialized processor capable of reconstructing charged particle tracks in a realistic LHC silicon tracker detector, at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the “artificial retina algorithm”, inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is the first step towards a real-time track reconstruction device aimed at processing complex events of high-luminosity LHC experiments at a 40 MHz crossing rate.
77 FR 27797 - Request for Certification of Compliance-Rural Industrialization Loan and Grant Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... 4279-2) for the following: Applicant/Location: Samoa Tuna Processors, Inc. Principal Product/Purpose... improvements, replace machinery and equipment and utilities repairs and other upgrades. The plant will be...
NASA Astrophysics Data System (ADS)
Pribil, Klaus; Flemmig, Joerg
1994-09-01
This paper gives an overview of the current development status of the SOLACOS program and presents its highlights. SOLACOS (Solid State Laser Communications in Space) is the national German program to develop a high-performance laser communication system for high-data-rate transmission between LEO and GEO satellites (Inter Orbit Link, IOL). Two experimental demonstrator terminals are designed and developed in the SOLACOS program. The main development objectives are the Pointing, Acquisition and Tracking (PAT) subsystem and the high-data-rate communication system. All key subsystems and components are developed from the outset to be upgraded to full space qualification in follow-on projects. The main design objective for the system is a high degree of modularity, which allows the system to be easily upgraded with newly emerging technologies. Therefore, all main subsystems are interconnected via fibers to ease replacement of subsystems. The system implements an asymmetric data link with a 650 Mbit/s return channel and a 10 Mbit/s forward channel. The 650 Mbit/s channel is based on a diode-pumped Nd:YAG laser with an integrated-optics modulator and uses the syncbit transmission scheme. In the syncbit scheme, the synchronization information needed to maintain phase lock of the coherent receiver's local oscillator is transmitted time-multiplexed into the data stream. The PAT system comprises two beam detection sensors and three beam steering elements. For initial acquisition and tracking of the remote satellite, a high-speed CCD camera with an integrated image processing unit, the Acquisition and Tracking Sensor (ATS), is used. In the tracking mode, the beam position is sensed via the Fibernutator sensor, which is also used to couple the incoming signal into the receiver fiber. Incoming and outgoing beams are routed through the telescopes, which are positioned with a two-axis gimbal mechanism and a high-speed beam steering mirror.
The PAT system is controlled by a digital signal processor. For beam control advanced PAT algorithms are under development.
A digital video tracking system
NASA Astrophysics Data System (ADS)
Giles, M. K.
1980-01-01
The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.
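The tracker-processor stage described above, locating the center of mass of the object projection and predicting the next observation angle, can be sketched in a few lines. The frame contents, threshold value, and constant-velocity prediction below are illustrative assumptions, not RTV implementation details.

```python
import numpy as np

def center_of_mass(frame, threshold):
    """Centroid (x, y) of pixels brighter than the threshold."""
    ys, xs = np.nonzero(frame > threshold)
    return xs.mean(), ys.mean()

frame = np.zeros((64, 64))
frame[20:24, 30:34] = 255.0                  # bright 4x4 target
cx, cy = center_of_mass(frame, threshold=128)
prev = (30.0, 20.0)                          # centroid from the previous frame
pred = (2 * cx - prev[0], 2 * cy - prev[1])  # constant-velocity prediction
```

At 60 observations per second, such a prediction gives the pointing system a target angle for the next frame before it arrives, which is the essence of the predicted-observation-angle requirement above.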
Processing techniques for software based SAR processors
NASA Technical Reports Server (NTRS)
Leung, K.; Wu, C.
1983-01-01
Software SAR processing techniques developed to handle Shuttle Imaging Radar-B (SIR-B) data are reviewed. The algorithms are devised for data processing procedure selection, SAR correlation function implementation, utilization of multiple array processors, cornerturning, variable-reference-length azimuth processing, and range migration handling. The Interim Digital Processor (IDP), originally implemented for handling Seasat SAR data, has been adapted for the SIR-B and offers a resolution of 100 km using a processing procedure based on the fast Fourier transform (FFT) fast-correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-85.
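The FFT fast-correlation approach mentioned above amounts to performing pulse compression as a multiplication in the frequency domain: the echo is correlated with the reference chirp via forward and inverse FFTs instead of a direct time-domain convolution. The chirp parameters below are arbitrary illustration, not SIR-B values.

```python
import numpy as np

n = 1024
t = np.arange(n) / n
chirp = np.exp(1j * np.pi * 200.0 * t ** 2)    # reference chirp replica
echo = np.roll(chirp, 300)                      # point target delayed 300 samples

# circular cross-correlation via the frequency domain
spectrum = np.fft.fft(echo) * np.conj(np.fft.fft(chirp))
compressed = np.fft.ifft(spectrum)
delay = int(np.argmax(np.abs(compressed)))      # peak at the target delay
```

For N-sample records this costs O(N log N) rather than O(N²), which is what made software correlation practical on the processors of the era.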
Fuzzy logic particle tracking velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1993-01-01
Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques which typically require a sequence (greater than 2) of image frames for accurately tracking particles. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each, results in more than 200 velocity vectors in under 8 seconds of processing time.
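The fuzzy scoring idea can be illustrated with a toy rule base: each candidate pairing of a particle in frame 1 with one in frame 2 receives a membership value from simple triangular fuzzy sets over (a) the displacement magnitude and (b) the deviation from the local mean displacement, combined with a fuzzy AND; the pairing with the highest combined membership wins. The rule shapes and limits below are illustrative, not taken from the paper.

```python
import math

def tri(x, peak, width):
    """Triangular fuzzy membership: 1 at the peak, 0 beyond peak +/- width."""
    return max(0.0, 1.0 - abs(x - peak) / width)

def score(disp, local_mean, max_disp=10.0, max_dev=3.0):
    mag = math.hypot(disp[0], disp[1])
    dev = math.hypot(disp[0] - local_mean[0], disp[1] - local_mean[1])
    # combine the two rules with a fuzzy AND (minimum)
    return min(tri(mag, 0.0, max_disp), tri(dev, 0.0, max_dev))

local_mean = (2.0, 1.0)                  # mean displacement of neighbours
candidates = [(2.1, 0.9), (7.0, -4.0)]   # possible frame-2 matches
best = max(candidates, key=lambda d: score(d, local_mean))
```

Relying on agreement with neighbouring displacements is what lets a fuzzy processor resolve ambiguous matches from only two frames, rather than needing a longer image sequence.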
The CMS Level-1 Calorimeter Trigger for LHC Run II
NASA Astrophysics Data System (ADS)
Sinthuprasith, Tutanon
2017-01-01
The phase-1 upgrades of the CMS Level-1 calorimeter trigger have been completed. The Level-1 trigger has been fully commissioned and it will be used by CMS to collect data starting from the 2016 data run. The new trigger has been designed to improve the performance at high luminosity and large number of simultaneous inelastic collisions per crossing (pile-up). For this purpose it uses a novel design, the Time Multiplexed Design, which enables the data from an event to be processed by a single trigger processor at full granularity over several bunch crossings. The TMT design is a modular design based on the uTCA standard. The architecture is flexible and the number of trigger processors can be expanded according to the physics needs of CMS. Intelligent, more complex, and innovative algorithms are now the core of the first decision layer of CMS: the upgraded trigger system implements pattern recognition and MVA (Boosted Decision Tree) regression techniques in the trigger processors for pT assignment, pile up subtraction, and isolation requirements for electrons, and taus. The performance of the TMT design and the latency measurements and the algorithm performance which has been measured using data is also presented here.
Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.
Candy, J V
2015-09-01
The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
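A minimal bootstrap particle filter of the kind referenced above can be sketched for a scalar state (a stand-in for a single modal coefficient) tracked from noisy measurements; the propagate, weight, and resample steps are the core of the sequential Bayesian processor. The model and noise levels here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(measurements, n_particles=500, q=0.05, r=0.2):
    """Bootstrap particle filter for a random-walk state with Gaussian noise."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in measurements:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)            # likelihood
        w = w / w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)     # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return estimates

true_state = 1.0
zs = true_state + rng.normal(0.0, 0.2, 50)   # noisy hydrophone-like data
est = particle_filter(zs)
```

The unscented Kalman filter variant mentioned in the abstract replaces the particle ensemble with a small set of deterministically chosen sigma points, trading generality for cost.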
Research of TREETOPS Structural Dynamics Controls Simulation Upgrade
NASA Technical Reports Server (NTRS)
Yates, Rose M.
1996-01-01
Under the provisions of contract number NAS8-40194, which was entitled 'TREETOPS Structural Dynamics and Controls Simulation System Upgrade', Oakwood College contracted to produce an upgrade to the existing TREETOPS suite of analysis tools. This suite includes the main simulation program, TREETOPS, two interactive preprocessors, TREESET and TREEFLX, an interactive post processor, TREEPLOT, and an adjunct program, TREESEL. A 'Software Design Document', which provides descriptions of the argument lists and internal variables for each subroutine in the TREETOPS suite, was established. Additionally, installation guides for both DOS and UNIX platforms were developed. Finally, updated User's Manuals, as well as a Theory Manual, were generated.
Online Event Reconstruction in the CBM Experiment at FAIR
NASA Astrophysics Data System (ADS)
Akishina, Valentina; Kisel, Ivan
2018-02-01
Targeting rare observables, the CBM experiment will operate at interaction rates of up to 10 MHz, unprecedented in heavy-ion experiments so far. This requires a novel free-streaming readout system and a new concept of data processing. The huge data rates of the CBM experiment will be reduced online to a recordable rate before the data are saved to mass storage. Full collision reconstruction and selection will be performed online in a dedicated processor farm. In order to make an efficient event selection online, a clean sample of particles has to be provided by the reconstruction package, called First Level Event Selection (FLES). The FLES reconstruction and selection package consists of several modules: track finding, track fitting, event building, short-lived particle finding, and event selection. Since detector measurements also contain time information, event building is done at all stages of the reconstruction process. The input data are distributed within the FLES farm in the form of time-slices. A time-slice is reconstructed in parallel across processor cores. After all tracks of the whole time-slice are found and fitted, they are collected into clusters of tracks originating from common primary vertices, which are then fitted, thus identifying the interaction points. Secondary tracks are associated with primary vertices according to their estimated production times. After that, short-lived particles are found and the full event-building process is finished. The last stage of the FLES package is the selection of events according to the requested trigger signatures. The event reconstruction procedure and the results of its application to simulated collisions in the CBM detector setup are presented and discussed in detail.
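The time-based event building described above can be sketched simply: in a free-streaming readout there are no hardware event boundaries, so hits within a time-slice are grouped into event candidates wherever a gap longer than some threshold separates consecutive hit times. The gap threshold and hit times below are illustrative, not CBM values.

```python
def build_events(hit_times_ns, gap_ns=50.0):
    """Split time-ordered hits into events at gaps longer than gap_ns."""
    events, current = [], [hit_times_ns[0]]
    for t in hit_times_ns[1:]:
        if t - current[-1] > gap_ns:
            events.append(current)
            current = [t]
        else:
            current.append(t)
    events.append(current)
    return events

hits = [0, 5, 12, 200, 204, 210, 500]   # three collisions in one time-slice
events = build_events(hits)
```

At 10 MHz interaction rates collisions can overlap in time, which is why CBM performs event building at every reconstruction stage rather than with a single gap cut like this one.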
Friedmann, Simon; Frémaux, Nicolas; Schemmel, Johannes; Gerstner, Wulfram; Meier, Karlheinz
2013-01-01
In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as a baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider the communication latency between the wafer and the conventional control computer that simulates the environment. This latency increases the delay with which the reward is sent to the embedded processor. Because of the time-continuous operation of the analog synapses, this delay can cause the updates to deviate from those of the non-delayed case. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.
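The probabilistic-update finding above can be illustrated with stochastic rounding: with low-resolution weights, an update smaller than one quantization step would always round to zero, so instead the sub-step remainder is applied with matching probability, making the expected weight change equal to the ideal analog update. The weight resolution and update size below are illustrative, not BrainScaleS parameters.

```python
import random

random.seed(1)

def stochastic_round_update(weight, delta, step):
    """Apply delta to a weight stored on a grid with spacing `step`."""
    n_steps, frac = divmod(delta / step, 1.0)
    if random.random() < frac:       # carry the sub-step remainder randomly
        n_steps += 1
    return weight + n_steps * step

delta, step = 0.01, 1.0 / 15         # update smaller than one 4-bit step
updates = [stochastic_round_update(0.0, delta, step) for _ in range(10000)]
mean_update = sum(updates) / len(updates)
```

Individual updates are noisier than the ideal analog ones, but their mean is unbiased, which is why learning performance can be recovered despite the coarse weight grid.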
Friedmann, Simon; Frémaux, Nicolas; Schemmel, Johannes; Gerstner, Wulfram; Meier, Karlheinz
2013-01-01
In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider communication latency between wafer and the conventional control computer system that is simulating the environment. This latency increases the delay, with which the reward is sent to the embedded processor. Because of the time continuous operation of the analog synapses, delay can cause a deviation of the updates as compared to the not delayed situation. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project. PMID:24065877
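The finding that probabilistic updates help low-resolution weights can be illustrated with a stochastic-rounding sketch. This is a generic illustration under assumed parameters (16 weight levels, unit range); the names and the actual BrainScaleS update path are not from the paper:

```python
import random

def stochastic_round(value, step):
    """Round value to a multiple of step; the fractional remainder sets
    the probability of rounding up, so rounding is unbiased on average."""
    lower = (value // step) * step
    frac = (value - lower) / step
    return lower + step if random.random() < frac else lower

def update_weight(w, dw, levels=16, w_max=1.0):
    """Apply an analog update dw to a weight stored on a discrete grid
    of `levels` values in [0, w_max] (e.g. 4-bit hardware weights)."""
    step = w_max / (levels - 1)
    w_new = stochastic_round(w + dw, step)
    return min(max(w_new, 0.0), w_max)
```

Deterministic rounding would silently discard any update smaller than half a weight step and stall learning; stochastic rounding keeps the expected update equal to dw even when |dw| is far below one step.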
L1 track trigger for the CMS HL-LHC upgrade using AM chips and FPGAs
NASA Astrophysics Data System (ADS)
Fedi, Giacomo
2017-08-01
The increase in luminosity at the HL-LHC will require the introduction of tracker information into CMS's Level-1 trigger system to maintain an acceptable trigger rate when selecting interesting events, despite the order-of-magnitude increase in minimum bias interactions. To meet the latency requirements, dedicated hardware has to be used. This paper presents the results of tests of a prototype system (pattern recognition mezzanine), which serves as the core of pattern recognition and track fitting for the CMS experiment, combining the power of both associative memory custom ASICs and modern Field Programmable Gate Array (FPGA) devices. The mezzanine uses the latest available associative memory devices (AM06) and the most modern Xilinx UltraScale FPGAs. The results of the test for a complete tower comprising about 0.5 million patterns are presented, using simulated events traversing the upgraded CMS detector as input. The paper shows the performance of the pattern matching, track finding and track fitting, along with the latency and processing time needed. The relative pT resolution for muons measured using the reconstruction algorithm is of the order of 1% in the range 3-100 GeV/c.
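The associative-memory stage can be modeled in software as a bank of coarse hit patterns matched against an event's fired superstrips. This is an illustrative sketch of the matching logic, not the AM06 firmware; layer counts and IDs are invented:

```python
def match_patterns(pattern_bank, event_superstrips, max_misses=1):
    """Return indices of bank patterns whose layers are (almost) all hit.
    pattern_bank: list of tuples, one coarse superstrip ID per layer.
    event_superstrips: per-layer sets of superstrip IDs fired in the event.
    A real AM chip performs this comparison for all patterns in parallel."""
    roads = []
    for i, pattern in enumerate(pattern_bank):
        misses = sum(1 for layer, ss in enumerate(pattern)
                     if ss not in event_superstrips[layer])
        if misses <= max_misses:
            roads.append(i)
    return roads
```

The matched "roads" are what get passed downstream for full-resolution track fitting in the FPGA, which is why the coarse pattern granularity directly trades bank size against fit workload.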
Castel, Anne-Laure; Menet, Aymeric; Ennezat, Pierre-Vladimir; Delelis, François; Le Goffic, Caroline; Binda, Camille; Guerbaai, Raphaëlle-Ashley; Levy, Franck; Graux, Pierre; Tribouilloy, Christophe; Maréchaux, Sylvestre
2016-01-01
Speckle tracking can be used to measure left ventricular global longitudinal strain (GLS). The aim was to study the effect of speckle tracking software product upgrades on GLS values and intervendor consistency. Subjects (patients or healthy volunteers) underwent systematic echocardiography with equipment from Philips and GE, without a change in their position. Off-line post-processing for GLS assessment was performed with the former and most recent upgrades from these two vendors (Philips QLAB 9.0 and 10.2; GE EchoPAC 12.1 and 13.1.1). GLS was obtained in three myocardial layers with EchoPAC 13.1.1. Intersoftware and intervendor consistency was assessed. Interobserver variability was tested in a subset of patients. Among 73 subjects (65 patients and 8 healthy volunteers), absolute values of GLS were higher with QLAB 10.2 compared with 9.0 (intraclass correlation coefficient [ICC]: 0.88; bias: 2.2%). Agreement between EchoPAC 13.1.1 and 12.1 varied by myocardial layer (13.1.1 only): midwall (ICC: 0.95; bias: -1.1%), endocardial (ICC: 0.93; bias: 1.6%) and epicardial (ICC: 0.80; bias: -3.3%). Although GLS was comparable for QLAB 9.0 versus EchoPAC 12.1 (ICC: 0.95; bias: 0.5%), the agreement was lower between QLAB 10.2 and EchoPAC 13.1.1 for the endocardial (ICC: 0.91; bias: 1.1%), midwall (ICC: 0.73; bias: 3.9%) and epicardial (ICC: 0.54; bias: 6.0%) layers. Interobserver variability of all software products in a subset of 20 patients was excellent (ICC: 0.97-0.99; bias: -0.8 to 1.0%). Upgrades of speckle tracking software may be associated with significant changes in GLS values, which could affect intersoftware and intervendor consistency. This finding has important clinical implications for the longitudinal follow-up of patients with speckle tracking echocardiography. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
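The bias figures above are paired-comparison statistics; a minimal Bland-Altman sketch of how a bias and its 95% limits of agreement are computed from two measurement series of the same subjects (synthetic numbers, not the study's data):

```python
def bland_altman(a, b):
    """Mean paired difference (bias) and 95% limits of agreement
    between two measurement series taken on the same subjects."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A bias of a few percentage points of strain, as reported between software versions here, is clinically meaningful because GLS follow-up decisions often hinge on changes of similar magnitude.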
A Cost Effective System Design Approach for Critical Space Systems
NASA Technical Reports Server (NTRS)
Abbott, Larry Wayne; Cox, Gary; Nguyen, Hai
2000-01-01
NASA-JSC required an avionics platform capable of serving a wide range of applications in a cost-effective manner. In part, making the avionics platform cost effective means adhering to open standards and supporting the integration of COTS products with custom products. Inherently, operation in space requires low power, mass, and volume while retaining high performance, reconfigurability, scalability, and upgradability. The Universal Mini-Controller project is based on a modified PC/104-Plus architecture while maintaining full compatibility with standard COTS PC/104 products. The architecture consists of a library of building block modules, which can be mixed and matched to meet a specific application. A set of NASA-developed core building blocks, a processor card, an analog input/output card, and a Mil-Std-1553 card, have been constructed to meet critical functions and unique interfaces. The design for the processor card is based on the PowerPC architecture. This architecture provides an excellent balance between power consumption and performance, and has an upgrade path to the forthcoming radiation-hardened PowerPC processor. The processor card, which makes extensive use of surface mount technology, has a 166 MHz PowerPC 603e processor, 32 Mbytes of error detected and corrected RAM, 8 Mbytes of Flash, and 1 Mbyte of EPROM, on a single PC/104-Plus card. Similar densities have been achieved with the quad channel Mil-Std-1553 card and the analog input/output cards. The power management built into the processor and its peripheral chip allows the power and performance of the system to be adjusted to meet the requirements of the application, adding another dimension to the flexibility of the Universal Mini-Controller. Unique mechanical packaging allows the Universal Mini-Controller to accommodate standard COTS and custom oversized PC/104-Plus cards.
This mechanical packaging also provides thermal management via conductive cooling of COTS boards, which are typically designed for convection cooling methods.
Test beam performance measurements for the Phase I upgrade of the CMS pixel detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dragicevic, M.; Friedl, M.; Hrubec, J.; ...
2017-05-30
A new pixel detector for the CMS experiment was built in order to cope with the instantaneous luminosities anticipated for the Phase I Upgrade of the LHC. The new CMS pixel detector provides four-hit tracking with a reduced material budget as well as new cooling and powering schemes. A new front-end readout chip mitigates buffering and bandwidth limitations, and allows operation at low comparator thresholds. In this paper, comprehensive test beam studies are presented, which have been conducted to verify the design and to quantify the performance of the new detector assemblies in terms of tracking efficiency and spatial resolution. Under optimal conditions, the tracking efficiency is $$99.95\\pm0.05\\,\\%$$, while the intrinsic spatial resolutions are $$4.80\\pm0.25\\,\\mu \\mathrm{m}$$ and $$7.99\\pm0.21\\,\\mu \\mathrm{m}$$ along the $$100\\,\\mu \\mathrm{m}$$ and $$150\\,\\mu \\mathrm{m}$$ pixel pitch, respectively. The findings are compared to a detailed Monte Carlo simulation of the pixel detector and good agreement is found.
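A tracking efficiency quoted as a central value with an uncertainty is a binomial estimate from counted tracks. As a hedged sketch, here is the simple Wald form of that computation; the paper may well use a different (e.g. Clopper-Pearson) interval, and the track counts below are invented:

```python
def efficiency(passed, total):
    """Efficiency and its simple binomial (Wald) uncertainty,
    sqrt(eff * (1 - eff) / N)."""
    eff = passed / total
    err = (eff * (1 - eff) / total) ** 0.5
    return eff, err
```

Note that near 100% efficiency the Wald error shrinks toward zero and underestimates the true uncertainty, which is one reason precision measurements prefer exact binomial intervals.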
Traditional Tracking with Kalman Filter on Parallel Architectures
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; MacNeill, Ian; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2015-05-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The most common track finding techniques in use today, however, are those based on the Kalman Filter. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. We report the results of our investigations into the potential and limitations of these algorithms on the new parallel hardware.
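As a reminder of the kernel being parallelized, here is a scalar Kalman filter measurement update. The real problem carries a 5- or 6-parameter state and covariance matrix per track, repeated across thousands of tracks per event, which is exactly the shape that vector units and many-core devices exploit; this one-dimensional sketch is illustrative only:

```python
def kf_update(x, P, z, R):
    """One Kalman measurement update for a scalar state:
    x, P = state estimate and its variance; z, R = measurement and its variance."""
    K = P / (P + R)          # Kalman gain
    x_new = x + K * (z - x)  # blend prediction with measurement
    P_new = (1.0 - K) * P    # uncertainty shrinks after the update
    return x_new, P_new

def fit_hits(hits, R=1.0):
    """Filter a sequence of equal-variance measurements of a static quantity."""
    x, P = hits[0], R
    for z in hits[1:]:
        x, P = kf_update(x, P, z, R)
    return x, P
```

For a static state this reduces to the running mean, with the variance falling as R/n after n measurements; the per-track independence of these updates is what makes the algorithm a good candidate for data-parallel hardware.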
System considerations for detection and tracking of small targets using passive sensors
NASA Astrophysics Data System (ADS)
DeBell, David A.
1991-08-01
Passive sensors provide only a few discriminants to assist in threat assessment of small targets. Tracking of the small targets provides additional discriminants. This paper discusses the system considerations for tracking small targets using passive sensors, in particular EO sensors. Tracking helps establish good versus bad detections. Discussed are the requirements to be placed on the sensor system's accuracy, with respect to knowledge of the sightline direction. The detection of weak targets sets a requirement for two levels of tracking in order to reduce processor throughput. A system characteristic is the need to track all detections. For low thresholds, this can mean a heavy track burden. Therefore, thresholds must be adaptive in order not to saturate the processors. Second-level tracks must develop a range estimate in order to assess threat. Sensor platform maneuvers are required if the targets are moving. The need for accurate pointing, good stability, and a good update rate will be shown quantitatively, relating to track accuracy and track association.
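The adaptive-threshold requirement above can be sketched as a feedback loop that raises the detection threshold when the per-frame detection count would saturate the track processors. The gain, budget, and floor values here are purely illustrative assumptions, not from the paper:

```python
def adapt_threshold(threshold, n_detections, budget, gain=0.1, floor=1.0):
    """Nudge the detection threshold so the per-frame detection count
    tracks the processor's track-handling budget: too many detections
    raise the threshold, too few lower it."""
    error = (n_detections - budget) / budget
    new_t = threshold * (1.0 + gain * error)
    return max(new_t, floor)  # never drop below the sensor noise floor
```

A proportional rule like this trades some weak-target sensitivity for a bounded track load, which matches the paper's point that all detections must be tracked and the processors must not saturate.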
Space Shuttle avionics upgrade - Issues and opportunities
NASA Astrophysics Data System (ADS)
Swaim, Richard A.; Wingert, William B.
An overview is conducted of existing Space Shuttle avionics and the possibilities for upgrading the cockpit to reduce costs and increase functionality. The current avionics include five general-purpose computers fitted with multifunction displays, dedicated switches and indicators, and dedicated flight instruments. The operational needs of the Shuttle are reviewed in the light of the avionics and potential upgrades in the form of microprocessors and display systems. The use of better processors can provide hardware support for multitasking and memory management and can reduce the life-cycle cost for software. Some limitations of the current technology are acknowledged, including the Shuttle's power budget and structural configuration. A phased infusion of upgraded avionics is proposed that provides a functionally transparent replacement of crew-interface equipment as well as the addition of interface enhancements and the migration of selected functions.
Model-based Robotic Dynamic Motion Control for the Robonaut 2 Humanoid Robot
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Hulse, Aaron M.; Taylor, Ross C.; Curtis, Andrew W.; Gooding, Dustin R.; Thackston, Allison
2013-01-01
Robonaut 2 (R2), an upper-body dexterous humanoid robot, has been undergoing experimental trials on board the International Space Station (ISS) for more than a year. R2 will soon be upgraded with two climbing appendages, or legs, as well as a new integrated model-based control system. This control system satisfies two important requirements; first, that the robot can allow humans to enter its workspace during operation and second, that the robot can move its large inertia with enough precision to attach to handrails and seat track while climbing around the ISS. This is achieved by a novel control architecture that features an embedded impedance control law on the motor drivers called Multi-Loop control which is tightly interfaced with a kinematic and dynamic coordinated control system nicknamed RoboDyn that resides on centralized processors. This paper presents the integrated control algorithm as well as several test results that illustrate R2's safety features and performance.
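The embedded impedance law on the motor drivers can be caricatured, per joint, as a stiffness/damping torque around the commanded trajectory. The gains and the feedforward term below are illustrative assumptions, not R2's actual Multi-Loop parameters:

```python
def impedance_torque(q, qd, q_des, qd_des, K, D, tau_ff=0.0):
    """Joint impedance control: a virtual spring toward the desired
    position, a damper toward the desired velocity, plus a feedforward
    term (e.g. gravity compensation from the dynamic model)."""
    return K * (q_des - q) + D * (qd_des - qd) + tau_ff
```

The two requirements in the abstract map onto the gains: low stiffness K keeps the arm compliant when humans enter the workspace, while the model-based feedforward and coordinated outer loop recover the precision needed to seat grippers on handrails despite the robot's large inertia.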
ERIC Educational Resources Information Center
Banke, Ron; Di Gennaro, Guy; Ediger, Rick; Garner, Lanny; Hersom, Steve; Miller, Jack; Nemeth, Ron; Petrucelli, Jim; Sierks, Donna; Smith, Don; Swank, Kevin; West, Kevin
This book establishes guidelines for the construction and maintenance of tracks by providing information for building new tracks or upgrading existing tracks. Subjects covered include running track planning and construction, physical layout, available surfaces, and maintenance. General track requirements and construction specifications are…
An observatory control system for the University of Hawai'i 2.2m Telescope
NASA Astrophysics Data System (ADS)
McKay, Luke; Erickson, Christopher; Mukensnable, Donn; Stearman, Anthony; Straight, Brad
2016-07-01
The University of Hawai'i 2.2m telescope at Maunakea has operated since 1970, and has had several controls upgrades to date. The newest system will operate as a distributed hierarchy of GNU/Linux central server, networked single-board computers, microcontrollers, and a modular motion control processor for the main axes. Rather than just a telescope control system, this new effort is towards a cohesive, modular, and robust whole observatory control system, with design goals of fully robotic unattended operation, high reliability, and ease of maintenance and upgrade.
Robonaut 2 - Building a Robot on the International Space Station
NASA Technical Reports Server (NTRS)
Diftler, Myron; Badger, Julia; Joyce, Charles; Potter, Elliott; Pike, Leah
2015-01-01
In 2010, the Robonaut Project embarked on a multi-phase mission to perform technology demonstrations on-board the International Space Station (ISS), showcasing state of the art robotics technologies through the use of Robonaut 2 (R2). This phased approach implements a strategy that allows for the use of ISS as a test bed during early development to both demonstrate capability and test technology while still making advancements in the earth based laboratories for future testing and operations in space. While R2 was performing experimental trials onboard the ISS during the first phase, engineers were actively designing for Phase 2, Intra-Vehicular Activity (IVA) Mobility, that utilizes a set of zero-g climbing legs outfitted with grippers to grasp handrails and seat tracks. In addition to affixing the new climbing legs to the existing R2 torso, it became clear that upgrades to the torso to both physically accommodate the climbing legs and to expand processing power and capabilities of the robot were required. In addition to these upgrades, a new safety architecture was also implemented in order to account for the expanded capabilities of the robot. The IVA climbing legs not only needed to attach structurally to the R2 torso on ISS, but also required power and data connections that did not exist in the upper body. The climbing legs were outfitted with a blind mate adapter and coarse alignment guides for easy installation, but the upper body required extensive rewiring to accommodate the power and data connections. This was achieved by mounting a custom adapter plate to the torso and routing the additional wiring through the waist joint to connect to the new set of processors. In addition to the power and data channels, the integrated unit also required updated electronics boards, additional sensors and updated processors to accommodate a new operating system, software platform, and custom control system. 
In order to perform the unprecedented task of building a robot in space, extensive practice sessions and meticulous procedures were required. Since crew training time is at a premium, the R2 team took a skills-based training approach to ensure the astronauts were proficient with a basic skill set while refining the detailed procedures over several practice sessions and simulations. In addition to the crew activities, meticulous ground procedures were required in order to upgrade firmware on the upper body motor drivers. The new firmware for the IVA mobility unit needed to be deployed using the old software system. This also provided an opportunity to upgrade the upper body joints with new software and allowed for limited insight into the success of the updates. Complete verification that the updated firmware was successfully loaded was not confirmed until the rewiring of the upper body torso was complete.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez, Jonatan Piedra
2005-04-21
The new trigger processor, the Silicon Vertex Trigger (SVT), has dramatically improved the B physics capabilities of the upgraded CDF II Detector; for the first time in a hadron collider, the SVT has enabled access to non-lepton-triggered B meson decays. Within the newly available range of decay modes, the $$B^0_s \\to D^-_s\\pi^+$$ signature is of paramount importance in the measurement of the $$\\Delta m_s$$ mixing frequency. The analysis reported here is a step towards the measurement of this frequency; two were our goals: carrying out the absolute calibration of the opposite-side flavor taggers used in the $$\\Delta m_s$$ measurement; and measuring the $$B^0_d$$ mixing frequency in a $$B \\to D\\pi$$ sample, establishing the feasibility of the mixing measurement in this sample, whose decay length is strongly biased by the selective SVT trigger. We analyze a total integrated luminosity of 355 pb$$^{-1}$$ collected with the CDF II Detector. By triggering on muons, using the conventional di-muon trigger, or on displaced tracks, using the SVT trigger, we gather a sample rich in bottom and charm mesons.
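The figure of merit behind the flavor-tagger calibration mentioned above is usually the effective tagging power εD². The formulas below are the standard definitions; the event counts are invented for illustration:

```python
def tagging_power(n_right, n_wrong, n_untagged):
    """Effective tagging power eps * D^2 from tag outcomes:
    eps = fraction of events tagged at all,
    D   = dilution (n_right - n_wrong) / (n_right + n_wrong)."""
    tagged = n_right + n_wrong
    total = tagged + n_untagged
    eps = tagged / total
    dilution = (n_right - n_wrong) / tagged
    return eps * dilution ** 2
```

The statistical power of a mixing-frequency measurement scales with εD², which is why an absolute calibration of the taggers on a self-tagging sample is a prerequisite for the Δm_s analysis.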
The artificial retina for track reconstruction at the LHC crossing rate
NASA Astrophysics Data System (ADS)
Abba, A.; Bedeschi, F.; Citterio, M.; Caponio, F.; Cusimano, A.; Geraci, A.; Marino, P.; Morello, M. J.; Neri, N.; Punzi, G.; Piucci, A.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.
2016-04-01
We present the results of an R&D study for a specialized processor capable of precisely reconstructing events with hundreds of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus suitable for processing LHC events at the full crossing frequency. For this purpose we design and test a massively parallel pattern-recognition algorithm, inspired by the current understanding of the mechanisms adopted by the primary visual cortex of mammals in the early stages of visual-information processing. The detailed geometry and charged-particle activity of a large tracking detector are simulated and used to assess the performance of the artificial retina algorithm. We find that high-quality tracking in large detectors is possible with sub-microsecond latencies when the algorithm is implemented in modern, high-speed, high-bandwidth FPGA devices.
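The retina algorithm maps hits onto a grid of track-parameter cells whose response falls off with the hit-to-track distance; track candidates appear as local maxima of the grid. A toy two-dimensional version for straight lines y = m·x + q, with a Gaussian response of illustrative width (the paper's detector model and parameter space are far richer):

```python
import math

def retina_response(hits, m_cells, q_cells, sigma=0.1):
    """Accumulate each (slope, intercept) cell's response to all hits.
    Every cell sums a Gaussian of the distance between each hit and the
    line that the cell represents; maxima mark track candidates."""
    grid = {}
    for m in m_cells:
        for q in q_cells:
            r = sum(math.exp(-((y - (m * x + q)) ** 2) / (2 * sigma ** 2))
                    for x, y in hits)
            grid[(m, q)] = r
    return grid

def best_cell(grid):
    """Cell with the strongest response."""
    return max(grid, key=grid.get)
```

Because every cell's response is independent of every other cell's, the whole grid can be evaluated in parallel, which is what makes the scheme a natural fit for FPGAs at the 40 MHz crossing rate.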
NAVO MSRC Navigator. Spring 2006
2006-01-01
all of these upgrades are complete, the effective computing power of the NAVO MSRC will be essentially tripled, as measured by sustainable ... performance on the HPCMP benchmark suite. All four of these systems will be configured with two gigabytes of memory per processor, IBM’s “Federation” inter
Modular Filter and Source-Management Upgrade of RADAC
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Smith, Donna C.
2007-01-01
In an upgrade of the Range Data Acquisition Computer (RADAC) software, a modular software object library was developed to implement required functionality for filtering of flight-vehicle-tracking data and management of tracking-data sources. (The RADAC software is used to process flight-vehicle metric data for realtime display in the Wallops Flight Facility Range Control Center and Mobile Control Center.)
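One of the simplest members of the tracking-filter family that such a modular library would encapsulate is the alpha-beta tracker. This sketch is generic, with illustrative coefficients; it is not RADAC's actual filter:

```python
def alpha_beta_step(x, v, z, dt, alpha=0.5, beta=0.1):
    """One alpha-beta filter step: predict position assuming constant
    velocity, then correct both position and velocity from the residual
    between the prediction and the new measurement z."""
    x_pred = x + v * dt
    r = z - x_pred               # innovation (measurement residual)
    x_new = x_pred + alpha * r   # position correction
    v_new = v + (beta / dt) * r  # velocity correction
    return x_new, v_new
```

Packaging each filter behind a uniform step interface like this is what lets a source manager swap filters per tracking source without touching the display pipeline.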
Benefit of the UltraZoom beamforming technology in noise in cochlear implant users.
Mosnier, Isabelle; Mathias, Nathalie; Flament, Jonathan; Amar, Dorith; Liagre-Callies, Amelie; Borel, Stephanie; Ambert-Dahan, Emmanuèle; Sterkers, Olivier; Bernardeschi, Daniele
2017-09-01
The objectives of the study were to demonstrate the audiological and subjective benefits of the adaptive UltraZoom beamforming technology available in the Naída CI Q70 sound processor, in cochlear-implanted adults upgraded from a previous generation sound processor. Thirty-four adults aged between 21 and 89 years (mean 53 ± 19) were prospectively included. Nine subjects were unilaterally implanted, 11 bilaterally and 14 were bimodal users. The mean duration of cochlear implant use was 7 years (range 5-15 years). Subjects were tested in quiet with monosyllabic words and in noise with the adaptive French Matrix test in the best-aided conditions. The test setup contained a signal source in front of the subject and three noise sources at ±90° and 180°. The noise was presented at a fixed level of 65 dB SPL and the level of speech signal was varied to obtain the speech reception threshold (SRT). During the upgrade visit, subjects were tested with the Harmony and with the Naída CI sound processors in omnidirectional microphone configuration. After a take-home phase of 2 months, tests were repeated with the Naída CI processor with and without UltraZoom. Subjective assessment of the sound quality in daily environments was recorded using the APHAB questionnaire. No difference in performance was observed in quiet between the two processors. The Matrix test in noise was possible in the 21 subjects with the better performance. No difference was observed between the two processors for performance in noise when using the omnidirectional microphone. At the follow-up session, the median SRT with the Naída CI processor with UltraZoom was -4 dB compared to -0.45 dB without UltraZoom. The use of UltraZoom improved the median SRT by 3.6 dB (p < 0.0001, Wilcoxon paired test).
When looking at the APHAB outcome, improvement was observed for speech understanding in noisy environments (p < 0.01) and in aversive situations (p < 0.05) in the group of 21 subjects who were able to perform the Matrix test in noise and for speech understanding in noise (p < 0.05) in the group of 13 subjects with the poorest performance, who were not able to perform the Matrix test in noise. The use of UltraZoom beamforming technology, available on the new sound processor Naída CI, improves speech performance in difficult and realistic noisy conditions when the cochlear implant user needs to focus on the person speaking at the front. Using the APHAB questionnaire, a subjective benefit for listening in background noise was also observed in subjects with good performance as well as in those with poor performance. This study highlighted the importance of upgrading CI recipients to new technology and to include assessment in noise and subjective feedback evaluation as part of the process.
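The adaptive Matrix test converges on the SRT by moving the speech level after each response. A deliberately simplified one-up/one-down staircase against a deterministic listener illustrates the idea; the real Matrix procedure adapts step sizes on sentence scores, and all values here are invented:

```python
def run_staircase(true_srt, start_level=10.0, step=2.0, trials=50):
    """Toy adaptive track: lower the speech level (in dB SNR) after a
    correct response, raise it after an error, so the level oscillates
    around the intelligibility threshold. The toy listener is correct
    whenever the level is at or above true_srt."""
    level = start_level
    history = []
    for _ in range(trials):
        correct = level >= true_srt
        level += -step if correct else step
        history.append(level)
    # estimate the SRT as the mean presented level late in the track
    return sum(history[-10:]) / 10
```

With a symmetric rule the track settles into an oscillation one step wide around the threshold, so the late-track mean sits within one step of the true SRT, which is the accuracy regime in which a 3.6 dB median improvement is a large effect.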
Utilizing the ISS Mission as a Testbed to Develop Cognitive Communications Systems
NASA Technical Reports Server (NTRS)
Jackson, Dan
2016-01-01
The ISS provides an excellent opportunity for pioneering artificial intelligence software to meet the challenges of real-time communications (comm) link management. This opportunity empowers the ISS Program to forge a testbed for developing cognitive communications systems for the benefit of the ISS mission, manned Low Earth Orbit (LEO) science programs and future planetary exploration programs. In November 1998, the Flight Operations Directorate (FOD) started the ISS Antenna Manager (IAM) project to develop a single processor supporting multiple comm satellite tracking for two different antenna systems. Further, the processor was developed to be highly adaptable as it supported the ISS mission through all assembly stages. The ISS mission mandated communications specialists with complete knowledge of when the ISS was about to lose or gain comm link service. The current specialty mandated cognizance of large sun-tracking solar arrays and thermal management panels in addition to the highly-dynamic satellite service schedules and rise/set tables. This mission requirement makes the ISS the ideal communications management analogue for future LEO space station and long-duration planetary exploration missions. Future missions, with their precision-pointed, dynamic, laser-based comm links, require complete autonomy for managing high-data-rate communications systems. Development of cognitive communications management systems that permit any crew member or payload science specialist, regardless of experience level, to control communications is one of the greater benefits the ISS can offer new space exploration programs. The IAM project met a new mission requirement never previously levied against US space-borne communications systems management: process and display the orientation of large solar arrays and thermal control panels based on real-time joint angle telemetry.
However, IAM leaves the actual communications availability assessment to human judgment, which introduces unwanted variability because each specialist has a different core of experience with comm link performance. Because the ISS utilizes two different frequency bands, dynamic structure can be occasionally translucent at one frequency while it can completely interdict service at the other frequency. The impact of articulating structure on the comm link can depend on its orientation at the time it impinges on the link. It can become easy for a human specialist to cross-associate experience at one frequency with experience at the other frequency. Additionally, the specialist's experience is incremental, occurring one nine-hour shift at a time. Only the IAM processor experiences the complete 24x7x365 communications link performance for both communications links but, it has no "learning capability." If the IAM processor could be endowed with a cognitive ability to remember past structure-induced comm link outages, based on its knowledge of the ISS position, attitude, communications gear, array joint angles and tracking accuracy, it could convey such experience to the human operator. It could also use its learned communications link behaviors to accurately convey the availability of future communications sessions. Further, the tool could remember how accurately or inaccurately it predicted availability and correct future predictions based on past performance. The IAM tool could learn frequency-specific impacts due to spacecraft structures and pass that information along as "experience." Such development would provide a single artificial intelligence processor that could provide two different experience bases. If it also "knew" the satellite service schedule, it could distinguish structure blockage from schedule or planet blockage and then quickly switch to another satellite. 
Alternatively, just as a human operator could judge, a cognizant comm system based on the IAM model could "know" that the blockage is not going to last very long and continue tracking a comm satellite, waiting for it to track away from structure. Ultimately, once this capability was fully developed and tested in the Mission Control Center, it could be transferred on-orbit to support development of operations concepts that include more advanced cognitive communications systems. Future applications of this capability are easily foreseen because even more dynamic satellite constellations with more nodes and greater capability are coming. Currently, the ISS fully employs a 300 million bit-per-second (Mbps) return link for harvesting payload science. In the coming eighteen months, it will step up to 600 Mbps. Already there is talk of a 1.2 billion bit-per-second (Gbps) upgrade for the ISS and laser comm links have already been tested from the ISS. Every data rate upgrade mandates more complicated and sensitive communications equipment which implies greater expertise invested in the human operator. Future on-orbit cognizant comm systems will be needed to meet greater performance demands aboard larger, far more complicated spacecraft. In the LEO environment, the old-style one-satellite-per-spacecraft operations concept will give way to a new concept of a single customer spacecraft simultaneously using multiple comm satellites. Much more highly-dynamic manned LEO missions with decades of crew members potentially increase the demand for communications link performance. A cognizant on-board communications system will meet advanced communications demands from future LEO missions and future planetary missions. The ISS has fledgling components of future exploration programs, both LEO and planetary. 
Further, the Flight Operations Directorate, through the IAM project, has already begun to develop a communications management system that attempts to solve advanced problems ideally represented by dynamic structure impacting scheduled satellite service. With an earnest project to integrate artificial intelligence into the IAM processor, the ISS Program could develop a cognizant communications system that could be adapted and transferred to future on-orbit avionics designs.
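A first step toward the learning capability described above would be a memory that keys observed link outcomes on spacecraft geometry and reports an availability estimate for a planned pass. This is a toy sketch under assumed names and a crude angle binning; the IAM processor's actual state (position, attitude, per-band behavior, tracking accuracy) is far richer:

```python
class BlockageMemory:
    """Remember comm-link outcomes per (satellite, quantized array angle)
    and predict availability for that geometry from past observations."""

    def __init__(self, angle_bin_deg=10):
        self.bin = angle_bin_deg
        self.counts = {}  # (satellite, angle bin) -> [good passes, blocked passes]

    def record(self, satellite, array_angle_deg, blocked):
        key = (satellite, int(array_angle_deg // self.bin))
        good, bad = self.counts.get(key, [0, 0])
        self.counts[key] = [good + (not blocked), bad + blocked]

    def availability(self, satellite, array_angle_deg):
        key = (satellite, int(array_angle_deg // self.bin))
        good, bad = self.counts.get(key, [0, 0])
        if good + bad == 0:
            return None  # no experience yet for this geometry
        return good / (good + bad)
```

Keeping separate memories per satellite (and, in a real system, per frequency band) is what would let the tool learn that a structure translucent at one band fully interdicts the other, rather than cross-associating the two experiences as a human specialist might.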
Utilizing the ISS Mission as a Testbed to Develop Cognitive Communications Systems
NASA Technical Reports Server (NTRS)
Jackson, Dan
2016-01-01
The ISS provides an excellent opportunity for pioneering artificial intelligence software to meet the challenges of real-time communications (comm) link management. This opportunity empowers the ISS Program to forge a testbed for developing cognitive communications systems for the benefit of the ISS mission, manned Low Earth Orbit (LEO) science programs and future planetary exploration programs. In November 1998, the Flight Operations Directorate (FOD) started the ISS Antenna Manager (IAM) project to develop a single processor supporting tracking of multiple comm satellites for two different antenna systems. Further, the processor was developed to be highly adaptable as it supported the ISS mission through all assembly stages. The ISS mission mandated communications specialists with complete knowledge of when the ISS was about to lose or gain comm link service. The specialty also mandated cognizance of large sun-tracking solar arrays and thermal management panels in addition to the highly-dynamic satellite service schedules and rise/set tables. This mission requirement makes the ISS the ideal communications management analogue for future LEO space station and long-duration planetary exploration missions. Future missions, with their precision-pointed, dynamic, laser-based comm links, require complete autonomy for managing high-data-rate communications systems. Development of cognitive communications management systems that permit any crew member or payload science specialist, regardless of experience level, to control communications is one of the greater benefits the ISS can offer new space exploration programs. The IAM project met a new mission requirement never previously levied against US space-borne communications systems management: process and display the orientation of large solar arrays and thermal control panels based on real-time joint angle telemetry. 
However, IAM leaves the actual communications availability assessment to human judgement, which introduces unwanted variability because each specialist has a different core of experience with comm link performance. Because the ISS utilizes two different frequency bands, dynamic structure can occasionally be translucent at one frequency while completely interdicting service at the other. The impact of articulating structure on the comm link can depend on its orientation at the time it impinges on the link. It is easy for a human specialist to cross-associate experience at one frequency with experience at the other. Additionally, the specialist's experience is incremental, occurring one nine-hour shift at a time. Only the IAM processor experiences the complete 24x7x365 communications link performance for both communications links, but it has no "learning capability." If the IAM processor could be endowed with a cognitive ability to remember past structure-induced comm link outages, based on its knowledge of the ISS position, attitude, communications gear, array joint angles and tracking accuracy, it could convey such experience to the human operator. It could also use its learned communications link behaviors to accurately convey the availability of future communications sessions. Further, the tool could remember how accurately or inaccurately it predicted availability and correct future predictions based on past performance. The IAM tool could learn frequency-specific impacts due to spacecraft structures and pass that information along as "experience." Such development would provide a single artificial intelligence processor with two different experience bases. If it also "knew" the satellite service schedule, it could distinguish structure blockage from schedule or planet blockage and then quickly switch to another satellite. 
Alternatively, just as a human operator could judge, a cognizant comm system based on the IAM model could "know" that the blockage is not going to last very long and continue tracking a comm satellite, waiting for it to track away from structure. Ultimately, once this capability was fully developed and tested in the Mission Control Center, it could be transferred on-orbit to support development of operations concepts that include more advanced cognitive communications systems. Future applications of this capability are easily foreseen because even more dynamic satellite constellations with more nodes and greater capability are coming. Currently, the ISS fully employs its high-data-rate return link for harvesting payload science. In the coming months, it will double that data rate and is forecast to fully utilize that capability. Already there is talk of an upgrade that quadruples the current data rate allocated to ISS payload science before the end of its mission and laser comm links have already been tested from the ISS. Every data rate upgrade mandates more complicated and sensitive communications equipment which implies greater expertise invested in the human operator. Future on-orbit cognizant comm systems will be needed to meet greater performance demands aboard larger, far more complicated spacecraft. In the LEO environment, the old-style one-satellite-per-spacecraft operations concept will give way to a new concept of a single customer spacecraft simultaneously using multiple comm satellites. Much more highly-dynamic manned LEO missions with decades of crew members potentially increase the demand for communications link performance. A cognizant on-board communications system will meet advanced communications demands from future LEO missions and future planetary missions. The ISS has fledgling components of future exploration programs, both LEO and planetary. 
Further, the Flight Operations Directorate, through the IAM project, has already begun to develop a communications management system that attempts to solve advanced problems ideally represented by dynamic structure impacting scheduled satellite service. With an earnest project to integrate artificial intelligence into the IAM processor, the ISS Program could develop a cognizant communications system that could be adapted and transferred to future on-orbit avionics designs.
The artificial retina processor for track reconstruction at the LHC crossing rate
Abba, A.; Bedeschi, F.; Citterio, M.; ...
2015-03-16
We present results of an R&D study for a specialized processor capable of precisely reconstructing, in pixel detectors, hundreds of charged-particle tracks from high-energy collisions at a 40 MHz rate. We apply a highly parallel pattern-recognition algorithm, inspired by studies of how the brain processes visual images, and describe in detail an efficient hardware implementation in high-speed, high-bandwidth FPGA devices. This is the first detailed demonstration of reconstruction of offline-quality tracks at 40 MHz and makes the device suitable for processing Large Hadron Collider events at the full crossing frequency.
Trip optimization system and method for a train
Kumar, Ajith Kuttannair; Shaffer, Glenn Robert; Houpt, Paul Kenneth; Movsichoff, Bernardo Adrian; Chan, David So Keung
2017-08-15
A system for operating a train having one or more locomotive consists, with each locomotive consist comprising one or more locomotives. The system includes a locator element to determine a location of the train, a track characterization element to provide information about a track, a sensor for measuring an operating condition of the locomotive consist, a processor operable to receive information from the locator element, the track characterization element, and the sensor, and an algorithm embodied within the processor having access to the information to create a trip plan that optimizes performance of the locomotive consist in accordance with one or more operational criteria for the train.
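The claimed trip-planning step can be illustrated with a toy optimizer: pick a speed per track segment to minimize fuel subject to a trip-time criterion. The segment lengths, allowed speeds, and fuel model below are invented for illustration and are not taken from the patent.

```python
import itertools

# Toy trip-plan optimizer: exhaustively choose one speed setting per track
# segment, reject plans that miss the trip-time limit, and keep the plan
# with the lowest total fuel burn. All numbers here are illustrative.

SEGMENT_MILES = [10, 10, 10]          # track characterization: segment lengths
SPEEDS = [20, 30, 40]                 # allowed speed settings (mph)

def fuel_rate(speed):
    # Hypothetical fuel burn per mile, growing with the square of speed
    return 1.0 + 0.002 * speed ** 2

def best_plan(max_hours):
    best = None
    for plan in itertools.product(SPEEDS, repeat=len(SEGMENT_MILES)):
        hours = sum(m / s for m, s in zip(SEGMENT_MILES, plan))
        if hours > max_hours:
            continue                   # violates the trip-time criterion
        fuel = sum(m * fuel_rate(s) for m, s in zip(SEGMENT_MILES, plan))
        if best is None or fuel < best[0]:
            best = (fuel, plan)
    return best

fuel, plan = best_plan(max_hours=1.2)
# slowest feasible speeds win: one 20 mph segment, two at 30 mph
```

A real planner would replace the exhaustive search with dynamic programming or optimal control over a continuous speed profile, but the trade-off it balances is the same.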
MWPC prototyping and testing for STAR inner TPC upgrade
NASA Astrophysics Data System (ADS)
Shen, F.; Wang, S.; Yang, C.; Xu, Q.
2017-06-01
The STAR experiment at the Relativistic Heavy Ion Collider (RHIC) is upgrading the inner sectors of the Time Projection Chamber (iTPC). The iTPC upgrade project will increase the segmentation on the inner pad plane from 13 to 40 pad rows and renew the inner sector wire chambers. The upgrade will expand the TPC's acceptance from |η|<=1.0 to |η|<=1.5. Furthermore, the detector will have better acceptance for tracks with low momentum, as well as better resolution in both momentum and dE/dx for tracks of all momenta. The enhanced measurement capabilities of the STAR iTPC upgrade are crucial to the physics program of Phase II of the Beam Energy Scan (BES-II) at RHIC during 2019-2020, in particular the study of the QCD phase transition. In these proceedings, we discuss the iTPC MWPC module fabrication and testing results from the first full-size iTPC MWPC pre-prototype made at Shandong University.
Advanced computer architecture specification for automated weld systems
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.
Kalman Filter Tracking on Parallel Architectures
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2015-12-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques, including Cellular Automata or returning to the Hough Transform. The most common track finding techniques in use today are, however, those based on the Kalman Filter [2]. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust, and are exactly those being used today for the design of the tracking system for the HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with the Kalman Filter can achieve large speedups on both Intel Xeon and Xeon Phi processors. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.
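For readers unfamiliar with the method being parallelized, a minimal sketch of one Kalman-filter predict/update cycle, as applied layer by layer in track fitting, follows. The constant-slope state model, noise values, and hit positions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal Kalman filter for a straight track: state = (position, slope),
# one position measurement per detector layer.

def kf_step(x, P, z, F, Q, H, R):
    # Predict: propagate state and covariance to the next layer
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fold in the measurement z
    y = z - H @ x_pred                      # residual
    S = H @ P_pred @ H.T + R                # residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

F = np.array([[1.0, 1.0], [0.0, 1.0]])      # advance one layer spacing
Q = 1e-4 * np.eye(2)                        # process noise (e.g. scattering)
H = np.array([[1.0, 0.0]])                  # measure position only
R = np.array([[0.01]])                      # hit resolution

x = np.array([0.0, 0.0])                    # initial guess
P = np.eye(2)                               # large initial uncertainty
for z in [np.array([0.11]), np.array([0.19]), np.array([0.32])]:
    x, P = kf_step(x, P, z, F, Q, H, R)
# x converges toward the hits' common position/slope trend
```

The vectorization work the paper describes amounts to running many such filters in lockstep across tracks, with data laid out so the matrix algebra maps onto wide vector units.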
Burbank works on the EPIC in the Node 2
2012-02-28
ISS030-E-114433 (29 Feb. 2012) --- In the International Space Station's Destiny laboratory, NASA astronaut Dan Burbank, Expedition 30 commander, upgrades Multiplexer/Demultiplexer (MDM) computers and Portable Computer System (PCS) laptops and installs the Enhanced Processor & Integrated Communications (EPIC) hardware in the Payload 1 (PL-1) MDM.
75 FR 79418 - Request for Certification of Compliance-Rural Industrialization Loan and Grant Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
... 4279-2) for the following: Applicant/Location: Mt. Vernon Seafoods, LLC, Burlington, Washington... ship; purchase equipment, materials and machinery; perform upgrades to factory processor and company owned ship; and to create working capital. The office is to be located in Burlington, Washington. The...
NASA Technical Reports Server (NTRS)
Uldomkesmalee, Suraphol; Suddarth, Steven C.
1997-01-01
VIGILANTE is an ultrafast smart sensor testbed for generic Automatic Target Recognition (ATR) applications, with a series of capability demonstrations focused on cruise missile defense (CMD). VIGILANTE's sensor/processor architecture is based on next-generation UV/visible/IR sensors and a tera-operations-per-second sugar-cube processor, as well as a supporting airborne vehicle. Excellent results of efficient ATR methodologies that use an eigenvector/neural network combination and feature-based precision tracking have been demonstrated in the laboratory environment.
Using an ARM Processor to boost data acquisition rates
NASA Astrophysics Data System (ADS)
Brown, Anthony; Seaquest Collaboration
2015-10-01
It has been proposed (Fermilab E-1067) to use the SeaQuest (E906/E1039/1037) dimuon spectrometer to do a search for the dark photon and dark Higgs. The concept is that it would run in a parasitic mode with only minor upgrades to the spectrometer. There are various requirements for the upgrades, but one of them is to increase the DAQ rates, and one minimal-cost approach to do this will be discussed. The currently running SeaQuest (E906) experiment has modest rate requirements of around 1 kHz. Since the dark particle search would involve recording particles originating in the first magnet used as a beam dump, the data rate will be higher than recording events just from the target. Thus the DAQ rate capability will need to be increased to around 10 kHz. There exists a possible very low cost solution, as the Academia Sinica-designed TDCs contain an ARM processor that was not needed to meet the original SeaQuest (E906) needs. Since the 120 GeV beam from the Main Injector is delivered in a 4-second spill once per minute, and the ARM processor on the TDC has two dual-ported memory chips, these could be used to store data during each spill and then read the data out in the time between spills.
Track finding in ATLAS using GPUs
NASA Astrophysics Data System (ADS)
Mattmann, J.; Schmitt, C.
2012-12-01
The reconstruction and simulation of collision events is a major task in modern HEP experiments, involving several tens of thousands of standard CPUs. Graphics processors (GPUs), on the other hand, have become much more powerful and are by far outperforming standard CPUs in terms of floating point operations due to their massively parallel approach. The usage of these GPUs could therefore significantly reduce the overall reconstruction time per event or allow for the usage of more sophisticated algorithms. In this paper the track finding in the ATLAS experiment will be used as an example of how GPUs can be used in this context: the implementation on the GPU requires a change in the algorithmic flow to allow the code to work in the rather limited environment on the GPU in terms of memory, cache, and transfer speed from and to the GPU, and to make use of the massively parallel computation. Both the specific implementation of parts of the ATLAS track reconstruction chain and the performance improvements obtained will be discussed.
Development of CMOS pixel sensors for the upgrade of the ALICE Inner Tracking System
NASA Astrophysics Data System (ADS)
Molnar, L.
2014-12-01
The ALICE Collaboration is preparing a major upgrade of the current detector, planned for installation during the second long LHC shutdown in the years 2018-19, in order to enhance its low-momentum vertexing and tracking capability, and exploit the planned increase of the LHC luminosity with Pb beams. One of the cornerstones of the ALICE upgrade strategy is to replace the current Inner Tracking System in its entirety with a new, high-resolution, low-material ITS detector. The new ITS will consist of seven concentric layers equipped with Monolithic Active Pixel Sensors (MAPS) implemented using the 0.18 μm CMOS technology of TowerJazz. In this contribution, the key features of the ITS upgrade will be illustrated with emphasis on the functionality of the pixel chip. The ongoing developments on the readout architectures, which have been implemented in several fabricated prototypes, will be discussed. The operational features of these prototypes as well as the results of the characterisation tests before and after irradiation will also be presented.
ASR-9 processor augmentation card (9-PAC) phase II scan-scan correlator algorithms
DOT National Transportation Integrated Search
2001-04-26
The report documents the scan-scan correlator (tracker) algorithm developed for Phase II of the ASR-9 Processor Augmentation Card (9-PAC) project. The improved correlation and tracking algorithms in 9-PAC Phase II decrease the incidence of false-alar...
Due Processors: Educators Seek a Digital Upgrade for Teaching Law
ERIC Educational Resources Information Center
Monaghan, Peter
2008-01-01
In 1871, Christopher Columbus Langdell, a prominent jurist who had joined the law faculty at Harvard University, hit on the idea of compiling thick, imposing "casebooks" with hundreds of appeals-court rulings on particular areas of law--contracts, constitutional law, torts, and others. Today, the hefty tomes and related works have become the…
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 28 2012-07-01 2012-07-01 false Tracking. 279.56 Section 279.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.56 Tracking. (a...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 27 2014-07-01 2014-07-01 false Tracking. 279.56 Section 279.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.56 Tracking. (a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 28 2013-07-01 2013-07-01 false Tracking. 279.56 Section 279.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.56 Tracking. (a...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Tracking. 279.56 Section 279.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.56 Tracking. (a...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 27 2011-07-01 2011-07-01 false Tracking. 279.56 Section 279.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF USED OIL Standards for Used Oil Processors and Re-Refiners § 279.56 Tracking. (a...
Upgrades to the ISS Water Recovery System
NASA Technical Reports Server (NTRS)
Kayatin, Matthew; Takada, Kevin; Carter, Layne
2017-01-01
The ISS Water Recovery System (WRS) includes the Water Processor Assembly (WPA) and the Urine Processor Assembly (UPA). The WRS produces potable water from a combination of crew urine (first processed through the UPA), crew latent, and Sabatier product water. Though the WRS has performed well since operations began in November 2008, several modifications have been identified to improve the overall system performance. These modifications can reduce resupply and improve overall system reliability, which is beneficial for the ongoing ISS mission as well as for future NASA manned missions. The following paper details efforts to improve the WPA through the use of Reverse Osmosis technology to reduce the resupply mass of the WPA Multifiltration Bed and improved catalyst for the WPA Catalytic Reactor to reduce the operational temperature and pressure. For the UPA, this paper discusses progress on various concepts for improving the reliability of the UPA, including the implementation of a more reliable drive belt, improved methods for managing condensate in the stationary bowl of the Distillation Assembly, deleting the Separator Plumbing Assembly, and evaluating upgrades to the UPA vacuum pump.
Upgrades to the International Space Station Water Recovery System
NASA Technical Reports Server (NTRS)
Kayatin, Matthew J.; Pruitt, Jennifer M.; Nur, Mononita; Takada, Kevin C.; Carter, Layne
2017-01-01
The International Space Station (ISS) Water Recovery System (WRS) includes the Water Processor Assembly (WPA) and the Urine Processor Assembly (UPA). The WRS produces potable water from a combination of crew urine (first processed through the UPA), crew latent, and Sabatier product water. Though the WRS has performed well since operations began in November 2008, several modifications have been identified to improve the overall system performance. These modifications aim to reduce resupply and improve overall system reliability, which is beneficial for the ongoing ISS mission as well as for future NASA manned missions. The following paper details efforts to improve the WPA through the use of reverse osmosis membrane technology to reduce the resupply mass of the WPA Multi-filtration Bed and improved catalyst for the WPA Catalytic Reactor to reduce the operational temperature and pressure. For the UPA, this paper discusses progress on various concepts for improving the reliability of the system, including the implementation of a more reliable drive belt, improved methods for managing condensate in the stationary bowl of the Distillation Assembly, and evaluating upgrades to the UPA vacuum pump.
Design and Test of a 65nm CMOS Front-End with Zero Dead Time for Next Generation Pixel Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaioni, L.; Braga, D.; Christian, D.
This work is concerned with the experimental characterization of a synchronous analog processor with zero dead time developed in a 65 nm CMOS technology, conceived for pixel detectors at the HL-LHC experiment upgrades. It includes a low-noise, fast charge-sensitive amplifier with a detector leakage compensation circuit, and a compact, single-ended comparator able to correctly process hits belonging to two consecutive bunch crossing periods. A 2-bit Flash ADC is exploited for digital conversion immediately after the preamplifier. A description of the circuits integrated in the front-end processor and the initial characterization results are provided.
Rasid, Mohd Fadlee A; Woodward, Bryan
2005-03-01
One of the emerging issues in m-Health is how best to exploit the mobile communications technologies that are now almost globally available. The challenge is to produce a system to transmit a patient's biomedical signals directly to a hospital for monitoring or diagnosis, using an unmodified mobile telephone. The paper focuses on the design of a processor, which samples signals from sensors on the patient. It then transmits digital data over a Bluetooth link to a mobile telephone that uses the General Packet Radio Service. The modular design adopted is intended to provide a "future-proofed" system, whose functionality may be upgraded by modifying the software.
Case Study of Using High Performance Commercial Processors in Space
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Olivas, Zulema
2009-01-01
The purpose of the Space Shuttle Cockpit Avionics Upgrade project (1999-2004) was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The choice of CPU selected was the PowerPC family, which is a reduced instruction set computer (RISC) known for its high performance. However, the requirement for radiation tolerance resulted in the re-evaluation of the selected family member of the PowerPC line. Radiation testing revealed that the original selected processor (PowerPC 7400) was too soft to meet mission objectives and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but had some ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
Case Study of Using High Performance Commercial Processors in a Space Environment
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Olivas, Zulema
2009-01-01
The purpose of the Space Shuttle Cockpit Avionics Upgrade project was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The choice of CPU selected was the PowerPC family, which is a reduced instruction set computer (RISC) known for its high performance. However, the requirement for radiation tolerance resulted in the reevaluation of the selected family member of the PowerPC line. Radiation testing revealed that the original selected processor (PowerPC 7400) was too soft to meet mission objectives and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but fared better than the 7400 in its ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
3D environment modeling and location tracking using off-the-shelf components
NASA Astrophysics Data System (ADS)
Luke, Robert H.
2016-05-01
The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low-power and high-performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high-performance platforms at low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on stereo vision and inertial navigation to determine the movement of the system as well as to create a model of the environment sensed by the system.
Evaluation of GPUs as a level-1 track trigger for the High-Luminosity LHC
NASA Astrophysics Data System (ADS)
Mohr, H.; Dritschler, T.; Ardila, L. E.; Balzer, M.; Caselle, M.; Chilingaryan, S.; Kopmann, A.; Rota, L.; Schuh, T.; Vogelgesang, M.; Weber, M.
2017-04-01
In this work, we investigate the use of GPUs as a way of realizing a low-latency, high-throughput track trigger, using CMS as a showcase example. The CMS detector at the Large Hadron Collider (LHC) will undergo a major upgrade after the long shutdown from 2024 to 2026, when it will enter the high-luminosity era. During this upgrade, the silicon tracker will have to be completely replaced. In the High Luminosity operation mode, luminosities of 5-7 × 10^34 cm^-2 s^-1 and pileup averaging 140 events, with a maximum of up to 200 events, will be reached. These changes will require a major update of the triggering system. The demonstrated systems rely on dedicated hardware such as associative memory ASICs and FPGAs. We investigate the use of GPUs as an alternative way of realizing the requirements of the L1 track trigger. To this end we implemented a Hough transformation track finding step on GPUs and established a low-latency RDMA connection using the PCIe bus. To showcase the benefits of floating point operations, made possible by the use of GPUs, we present a modified algorithm that uses hexagonal bins for the parameter space and leads to a more truthful representation of the possible track parameters of the individual hits in Hough space. This leads to fewer duplicate candidates and reduces fake track candidates compared to the regular approach. With data-transfer latencies of 2 μs and processing times for the Hough transformation as low as 3.6 μs, we can show that latencies are not as critical as expected. However, computing throughput proves to be challenging due to hardware limitations.
Tests with beam setup of the TileCal phase-II upgrade electronics
NASA Astrophysics Data System (ADS)
Reward Hlaluku, Dingane
2017-09-01
The LHC has planned a series of upgrades culminating in the High Luminosity LHC which will have an average luminosity 5-7 times larger than the nominal Run-2 value. The ATLAS Tile calorimeter plans to introduce a new readout architecture by completely replacing the back-end and front-end electronics for the High Luminosity LHC. The photomultiplier signals will be fully digitized and transferred for every bunch crossing to the off-detector Tile PreProcessor. The Tile PreProcessor will further provide preprocessed digital data to the first level of trigger with improved spatial granularity and energy resolution in contrast to the current analog trigger signals. A single super-drawer module commissioned with the phase-II upgrade electronics is to be inserted into the real detector to evaluate and qualify the new readout and trigger concepts in the overall ATLAS data acquisition system. This new super-drawer, so-called hybrid Demonstrator, must provide analog trigger signals for backward compatibility with the current system. This Demonstrator drawer has been inserted into a Tile calorimeter module prototype to evaluate the performance in the lab. In parallel, one more module has been instrumented with two other front-end electronics options based on custom ASICs (QIE and FATALIC) which are under evaluation. These two modules together with three other modules composed of the current system electronics were exposed to different particles and energies in three test-beam campaigns during 2015 and 2016.
An artificial retina processor for track reconstruction at the LHC crossing rate
Bedeschi, F.; Cenci, R.; Marino, P.; ...
2017-11-23
The goal of the INFN-RETINA R&D project is to develop and implement a computational methodology that allows events with a large number (> 100) of charged-particle tracks to be reconstructed in pixel and silicon strip detectors at 40 MHz, thus matching the requirements for processing LHC events at the full bunch-crossing frequency. Our approach relies on a parallel pattern-recognition algorithm, dubbed artificial retina, inspired by the early stages of image processing by the brain. In order to demonstrate that a track-processing system based on this algorithm is feasible, we built a sizable prototype of a tracking processor tuned to 3 000 patterns, based on already existing readout boards equipped with Altera Stratix III FPGAs. The detailed geometry and charged-particle activity of a large tracking detector currently in operation are used to assess its performance. Here, we report on the test results with such a prototype.
Method and apparatus for optimizing a train trip using signal information
Kumar, Ajith Kuttannair; Daum, Wolfgang; Otsubo, Tom; Hershey, John Erik; Hess, Gerald James
2014-06-10
A system is provided for operating a railway network including a first railway vehicle during a trip along track segments. The system includes a first element for determining travel parameters of the first railway vehicle, a second element for determining travel parameters of a second railway vehicle relative to the track segments to be traversed by the first vehicle during the trip, a processor for receiving information from the first and the second elements and for determining a relationship between occupation of a track segment by the second vehicle and later occupation of the same track segment by the first vehicle and an algorithm embodied within the processor having access to the information to create a trip plan that determines a speed trajectory for the first vehicle. The speed trajectory is responsive to the relationship and further in accordance with one or more operational criteria for the first vehicle.
Real-time implementation of logo detection on open source BeagleBoard
NASA Astrophysics Data System (ADS)
George, M.; Kehtarnavaz, N.; Estevez, L.
2011-03-01
This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.
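As a toy illustration of the k-means component mentioned above — not the BeagleBoard code, and operating on plain 2-D points rather than SIFT descriptors — a minimal Lloyd's-iteration sketch:

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means; deterministic init spreads initial centers over the data."""
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[idx].astype(float).copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign every point to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# two well-separated blobs recover their own centers
pts = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
centers, labels = kmeans(pts, 2)
```

In the logo-detection pipeline the clustered items would be feature descriptors rather than points, but the update rule is the same.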
Automated target recognition and tracking using an optical pattern recognition neural network
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin
1991-01-01
The on-going development of an automatic target recognition and tracking system at the Jet Propulsion Laboratory is presented. This system is an optical pattern recognition neural network (OPRNN) that is an integration of an innovative optical parallel processor and a feature extraction based neural net training algorithm. The parallel optical processor provides high speed and vast parallelism as well as full shift invariance. The neural network algorithm enables simultaneous discrimination of multiple noisy targets in spite of their scales, rotations, perspectives, and various deformations. This fully developed OPRNN system can be effectively utilized for the automated spacecraft recognition and tracking that will lead to success in the Automated Rendezvous and Capture (AR&C) of the unmanned Cargo Transfer Vehicle (CTV). One of the most powerful optical parallel processors for automatic target recognition is the multichannel correlator. With the inherent advantages of parallel processing capability and shift invariance, multiple objects can be simultaneously recognized and tracked using this multichannel correlator. This target tracking capability can be greatly enhanced by utilizing a powerful feature extraction based neural network training algorithm such as the neocognitron. The OPRNN, currently under investigation at JPL, is constructed with an optical multichannel correlator where holographic filters have been prepared using the neocognitron training algorithm. The computation speed of the neocognitron-type OPRNN is up to 10(exp 14) analog connections/sec that enabling the OPRNN to outperform its state-of-the-art electronics counterpart by at least two orders of magnitude.
An optical processor for object recognition and tracking
NASA Technical Reports Server (NTRS)
Sloan, J.; Udomkesmalee, S.
1987-01-01
The design and development of a miniaturized optical processor that performs real time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and ditital processing could result in a very advanced robot vision system.
The new Inner Tracking System of the ALICE experiment
NASA Astrophysics Data System (ADS)
Martinengo, P.; Alice Collaboration
2017-11-01
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019-20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity expected during Run 3 and Run 4. The replacement of the existing Inner Tracking System with a completely new ultra-light, high-resolution detector is one of the cornerstones within this upgrade program. The main motivation of the ITS upgrade is to provide ALICE with an improved tracking capability and impact parameter resolution at very low transverse momentum, as well as to enable a substantial increase of the readout rate. The new ITS will consist of seven layers of innovative Monolithic Active Pixel Sensors with the innermost layer sitting at only 23 mm from the interaction point. This talk will focus on the design and the physics performance of the new ITS, as well as the technology choices adopted. The status of the project and the results from the prototypes characterization will also be presented.
Distributed micro-radar system for detection and tracking of low-profile, low-altitude targets
NASA Astrophysics Data System (ADS)
Gorwara, Ashok; Molchanov, Pavlo
2016-05-01
The proposed airborne surveillance radar system can detect, locate, track, and classify low-profile, low-altitude targets: from traditional fixed- and rotary-wing aircraft to non-traditional targets like unmanned aircraft systems (drones) and even small projectiles. The distributed micro-radar system is the next step in the development of the passive monopulse direction finder proposed by Stephen E. Lipsky in the 1980s. To extend the high-frequency limit and provide high sensitivity over a broad band of frequencies, multiple angularly spaced directional antennas are coupled with front-end circuits and separately connected to a direction-finder processor by a digital interface. Integration of the antennas with front-end circuits makes it possible to eliminate the waveguide lines that limit system bandwidth and create frequency-dependent phase errors. Digitizing the received signals close to the antennas allows loose distribution of the antennas and dramatically decreases the phase errors associated with waveguides. The accuracy of direction finding in the proposed micro-radar is then determined by the timing accuracy of the digital processor and the sampling frequency. Multi-band, multi-functional antennas can be distributed around the perimeter of an Unmanned Aircraft System (UAS) and connected to the processor by a digital interface, or can be distributed among a swarm/formation of mini/micro UAS and connected wirelessly. Expendable micro-radars can be distributed around the perimeter of a defended object to create a multi-static radar network. Low-profile, low-altitude, high-speed targets, like small projectiles, create a Doppler shift in a narrow frequency band. This signal can be effectively filtered and detected with high probability. The proposed micro-radar can work in passive, monostatic, or bistatic regimes.
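The narrow-band Doppler argument rests on the two-way (monostatic) shift f_d = 2 v f₀ / c. A minimal numeric sketch — the 10 GHz carrier and 800 m/s radial speed below are illustrative values, not parameters of the proposed system:

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(radial_velocity_mps, carrier_hz):
    """Two-way (monostatic) Doppler shift: f_d = 2 * v * f0 / c."""
    return 2.0 * radial_velocity_mps * carrier_hz / C

# a small projectile at 800 m/s radial velocity seen by an X-band (10 GHz) radar
fd = doppler_shift(800.0, 10e9)
print(round(fd))  # roughly 53 kHz -- a narrow band easily isolated by filtering
```

Because the shift sits in a narrow, predictable band, a matched narrow-band filter rejects most of the noise floor, which is why detection probability stays high even for small targets.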
2010-09-01
[List-of-figures fragment; recoverable captions: Figure 26, image of the phased array antenna; Figure 38, computation of correction angle from array factor and sum/difference beams; Figure 39, front panel of the tracking algorithm.]
77 FR 64374 - Notification of Petition for Approval; Port Authority Trans-Hudson Product Safety Plan
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-19
... assigned the petition Docket Number FRA-2012-0075. PATH is upgrading some of its track circuits with Digicode microprocessor-based track circuits. The Digicode track circuit is part of Alstom's Smartway Digital Track Circuit product line and will be used by PATH for train detection and broken rail detection...
APL - North Pacific Acoustic Laboratory
2015-03-04
PhilSea10 cruise were spent in a series of upgrades and associated tests of the TCTD system. Electronic upgrades in conjunction with the manufacturers, ADM...amplitude at positions sufficiently removed from caustics. Mr. White computed eigenrays to all tracked upper array hydrophone positions relative to each
NASA Astrophysics Data System (ADS)
Nellist, C.; Dinu, N.; Gkougkousis, E.; Lounis, A.
2015-06-01
The LHC accelerator complex will be upgraded between 2020-2022, to the High-Luminosity-LHC, to considerably increase statistics for the various physics analyses. To operate under these challenging new conditions, and maintain excellent performance in track reconstruction and vertex location, the ATLAS pixel detector must be substantially upgraded and a full replacement is expected. Processing techniques for novel pixel designs are optimised through characterisation of test structures in a clean room and also through simulations with Technology Computer Aided Design (TCAD). A method to study non-perpendicular tracks through a pixel device is discussed. Comparison of TCAD simulations with Secondary Ion Mass Spectrometry (SIMS) measurements to investigate the doping profile of structures and validate the simulation process is also presented.
A wideband software reconfigurable modem
NASA Astrophysics Data System (ADS)
Turner, J. H., Jr.; Vickers, H.
A wideband modem is described which provides signal processing capability for four Lx-band signals employing QPSK, MSK and PPM waveforms and employs a software reconfigurable architecture for maximum system flexibility and graceful degradation. The current processor uses a 2901 and two 8086 microprocessors per channel and performs acquisition, tracking, and data demodulation for JTIDS, GPS, IFF and TACAN systems. The next generation processor will be implemented using a VHSIC chip set employing a programmable complex array vector processor module, a GP computer module, customized gate array modules, and a digital array correlator. This integrated processor has application to a wide number of diverse system waveforms, and will bring the benefits of VHSIC technology insertion into avionic antijam communications systems.
2011-12-29
ISS030-E-017789 (29 Dec. 2011) --- Working in chorus with the International Space Station team in Houston's Mission Control Center, this astronaut and his Expedition 30 crewmates on the station install a set of Enhanced Processor and Integrated Communications (EPIC) computer cards in one of seven primary computers onboard. The upgrade will allow more experiments to operate simultaneously, and prepare for the arrival of commercial cargo ships later this year.
2011-12-29
ISS030-E-017776 (29 Dec. 2011) --- Working in chorus with the International Space Station team in Houston's Mission Control Center, this astronaut and his Expedition 30 crewmates on the station install a set of Enhanced Processor and Integrated Communications (EPIC) computer cards in one of seven primary computers onboard. The upgrade will allow more experiments to operate simultaneously, and prepare for the arrival of commercial cargo ships later this year.
FPGA-based Upgrade to RITS-6 Control System, Designed with EMP Considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harold D. Anderson, John T. Williams
2009-07-01
The existing control system for the RITS-6, a 20-MA 3-MV pulsed-power accelerator located at Sandia National Laboratories, was built as a system of analog switches because the operators needed to be close enough to the machine to hear pulsed-power breakdowns, yet the electromagnetic pulse (EMP) emitted would disable any processor-based solutions. The resulting system requires operators to activate and deactivate a series of 110-V relays manually in a complex order. The machine is sensitive to both the order of operation and the time taken between steps. A mistake in either case would cause a misfire and possible machine damage. Based on these constraints, a field-programmable gate array (FPGA) was chosen as the core of a proposed upgrade to the control system. An FPGA is a series of logic elements connected during programming. Based on their connections, the elements can mimic primitive logic elements, a process called synthesis. The circuit is static; all paths exist simultaneously and do not depend on a processor. This should make it less sensitive to EMP. By shielding it and using good electromagnetic interference-reduction practices, it should continue to operate well in the electrically noisy environment. The FPGA has two advantages over the existing system. In manual operation mode, the synthesized logic gates keep the operators in sequence. In addition, a clock signal and synthesized countdown circuit provides an automated sequence, with adjustable delays, for quickly executing the time-critical portions of charging and firing. The FPGA is modeled as a set of states, each state being a unique set of values for the output signals. The state is determined by the input signals, and in the automated segment by the value of the synthesized countdown timer, with the default mode placing the system in a safe configuration.
Unlike a processor-based system, any system stimulus that results in an abort situation immediately executes a shutdown, with only a tens-of-nanoseconds delay to propagate across the FPGA. This paper discusses the design, installation, and testing of the proposed system upgrade, including failure statistics and modifications to the original design.
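The state-machine behaviour described — an abort from any state immediately forcing the safe configuration — can be sketched abstractly; the states and event names below are hypothetical placeholders, not the actual RITS-6 signal names:

```python
# A dict-driven finite-state machine mirroring the FPGA design idea:
# transitions are gated by events, and "abort" wins from every state.

SAFE, CHARGING, ARMED, FIRING = "safe", "charging", "armed", "firing"

TRANSITIONS = {
    (SAFE, "start_charge"): CHARGING,
    (CHARGING, "charge_done"): ARMED,
    (ARMED, "fire"): FIRING,
    (FIRING, "done"): SAFE,
}

def step(state, event):
    if event == "abort":                           # abort forces safe from any state
        return SAFE
    return TRANSITIONS.get((state, event), state)  # out-of-sequence events are ignored

s = SAFE
for ev in ("start_charge", "charge_done", "abort"):
    s = step(s, ev)
print(s)  # abort mid-sequence returns the machine to "safe"
```

Ignoring out-of-sequence events is the software analogue of the synthesized gates that "keep the operators in sequence" in manual mode.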
Upgrades to the ISS Water Recovery System
NASA Technical Reports Server (NTRS)
Pruitt, Jennifer M.; Carter, Layne; Bagdigian, Robert M.; Kayatin, Matthew J.
2015-01-01
The ISS Water Recovery System (WRS) includes the Water Processor Assembly (WPA) and the Urine Processor Assembly (UPA). The WRS produces potable water from a combination of crew urine (first processed through the UPA), crew latent water, and Sabatier product water. The WRS has been operational on ISS since November 2008, producing over 21,000 L of potable water during that time. Though the WRS has performed well during this time, several modifications have been identified to improve the overall system performance. These modifications can reduce resupply and improve overall system reliability, which is beneficial for the ongoing ISS mission as well as for future NASA manned missions. The following paper lists these modifications, how they improve WRS performance, and the status of the ongoing development effort.
Broadband set-top box using MAP-CA processor
NASA Astrophysics Data System (ADS)
Bush, John E.; Lee, Woobin; Basoglu, Chris
2001-12-01
Advances in broadband access are expected to exert a profound impact on our everyday life. It will be the key to the digital convergence of communication, computer and consumer equipment. A common thread that facilitates this convergence comprises digital media and the Internet. To address this market, Equator Technologies, Inc., is developing the Dolphin broadband set-top box reference platform using its MAP-CA Broadband Signal Processor™ chip. The Dolphin reference platform is a universal media platform for display and presentation of digital contents on end-user entertainment systems. The objective of the Dolphin reference platform is to provide a complete set-top box system based on the MAP-CA processor. It includes all the necessary hardware and software components for the emerging broadcast and the broadband digital media market based on IP protocols. Such a reference design requires broadband Internet access and high-performance digital signal processing. By using the MAP-CA processor, the Dolphin reference platform is completely programmable, allowing various codecs to be implemented in software, such as MPEG-2, MPEG-4, H.263 and proprietary codecs. The software implementation also enables field upgrades to keep pace with evolving technology and industry demands.
The AMchip04 and the processing unit prototype for the FastTracker
NASA Astrophysics Data System (ADS)
Andreani, A.; Annovi, A.; Beretta, M.; Bogdan, M.; Citterio, M.; Alberti, F.; Giannetti, P.; Lanza, A.; Magalotti, D.; Piendibene, M.; Shochet, M.; Stabile, A.; Tang, J.; Tompkins, L.; Volpi, G.
2012-08-01
Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment's complexity, accelerator backgrounds, and luminosity increase, ever more complex and exclusive event selection is needed. We present the first prototype of a new Processing Unit (PU), the core of the FastTracker processor (FTK). FTK is a real-time tracking device for the ATLAS experiment's trigger upgrade. The computing power of the PU is such that a few hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV/c in ATLAS events up to Phase II instantaneous luminosities (3 × 10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below a hundred microseconds. The PU provides massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generally referred to as the "combinatorial challenge", is solved by the Associative Memory (AM) technology exploiting parallelism to the maximum extent; it compares the event to all pre-calculated "expectations" or "patterns" (pattern matching) simultaneously, looking for candidate tracks called "roads". This approach reduces the typically exponential complexity of CPU-based algorithms to linear behavior. Pattern recognition is completed by the time data are loaded into the AM devices. We report on the design of the first Processing Unit prototypes. The design had to address the most challenging aspects of this technology: a huge number of detector clusters ("hits") must be distributed at high rate with very large fan-out to all patterns (10 million patterns will be located on 128 chips placed on a single board), and a huge number of roads must be collected and sent back to the FTK post-pattern-recognition functions. A network of high-speed serial links is used to solve the data distribution problem.
LANDSAT-4 MSS Geometric Correction: Methods and Results
NASA Technical Reports Server (NTRS)
Brooks, J.; Kimmer, E.; Su, J.
1984-01-01
An automated image registration system such as that developed for LANDSAT-4 can produce all of the information needed to verify and calibrate the software and to evaluate system performance. The on-line MSS archive generation process which upgrades systematic correction data to geodetic correction data is described, as well as the control point library build subsystem which generates control point chips and support data for on-line upgrade of correction data. The system performance was evaluated for both temporal and geodetic registration. For temporal registration, 90% errors were computed to be 0.36 IFOV (instantaneous field of view; 1 IFOV = 82.7 meters) cross track, and 0.29 IFOV along track. Also, for actual production runs monitored, the 90% errors were 0.29 IFOV cross track and 0.25 IFOV along track. The system specification is 0.3 IFOV, 90% of the time, both cross and along track. For geodetic registration performance, the model bias was measured by designating control points in the geodetically corrected imagery.
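For concreteness, the quoted IFOV fractions convert to ground distance by simple multiplication with the 82.7 m IFOV stated above:

```python
IFOV_M = 82.7  # one MSS instantaneous field of view, in meters

def ifov_to_meters(err_ifov):
    """Convert a registration error expressed in IFOV units to meters."""
    return err_ifov * IFOV_M

# the quoted 90% temporal-registration errors, converted to meters
cross_track = ifov_to_meters(0.36)   # about 29.8 m
along_track = ifov_to_meters(0.29)   # about 24.0 m
```

The 0.3 IFOV specification thus corresponds to roughly 25 m on the ground.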
The Integration of DCS I/O to an Existing PLC
NASA Technical Reports Server (NTRS)
Sadhukhan, Debashis; Mihevic, John
2013-01-01
At the NASA Glenn Research Center (GRC), existing Programmable Logic Controller (PLC) I/O was replaced with Distributed Control System (DCS) I/O, while keeping the existing PLC sequence logic. The reasons for integrating the PLC logic and DCS I/O, along with an evaluation of the resulting system, are the subject of this paper. The pros and cons of the old system and the new upgrade are described, including operator workstation screen update times. Details of the physical layout and the communication between the PLC, the DCS I/O and the operator workstations are illustrated. The complex characteristics of a central process control system and the plan to remove the PLC processors in future upgrades are also discussed.
Digital Phase-Locked Loop With Phase And Frequency Feedback
NASA Technical Reports Server (NTRS)
Thomas, J. Brooks
1991-01-01
An advanced design for a digital phase-locked loop (DPLL) allows loop gains higher than those used in other designs. The design is divided into two major components: a counterrotation processor and a tracking processor. Notable features include the use of both phase and rate-of-change-of-phase feedback instead of frequency feedback alone, a normalized sine phase extractor, an improved method for extracting the measured phase, and an improved method for "compressing" the output rate.
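A toy numeric illustration of a loop that feeds back both phase and phase rate through a normalized sine phase extractor — the gains and structure below are invented for illustration and are not the design in the paper:

```python
import math

def dpll(phases, kp=0.3, kf=0.05):
    """Toy second-order DPLL: kp weights the phase-error feedback, kf the
    accumulated rate feedback. Returns the per-sample phase-extractor outputs."""
    est_phase, est_rate = 0.0, 0.0
    errs = []
    for theta in phases:
        err = math.sin(theta - est_phase)  # normalized sine phase extractor
        errs.append(err)
        est_rate += kf * err               # rate (integral) feedback path
        est_phase += est_rate + kp * err   # phase (proportional) feedback path
    return errs

# track a constant-rate phase ramp, i.e. a constant frequency offset
errs = dpll([0.01 * n for n in range(400)])
```

Because the rate path integrates the error, the loop is type 2: it locks onto a frequency offset with the residual phase error decaying to zero, which is the benefit of feeding back rate in addition to phase.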
AM06: the Associative Memory chip for the Fast TracKer in the upgraded ATLAS detector
NASA Astrophysics Data System (ADS)
Annovi, A.; Beretta, M. M.; Calderini, G.; Crescioli, F.; Frontini, L.; Liberali, V.; Shojaii, S. R.; Stabile, A.
2017-04-01
This paper describes the AM06 chip, which is a highly parallel processor for pattern recognition in the ATLAS high energy physics experiment. The AM06 contains memory banks that store data organized in 18 bit words; a group of 8 words is called "pattern". Each AM06 chip can store up to 131 072 patterns. The AM06 is a large chip, designed in 65 nm CMOS, and it combines full-custom memory arrays, standard logic cells and serializer/deserializer IP blocks at 2 Gbit/s for input/output communication. The overall silicon area is 168 mm2 and the chip contains about 421 million transistors. The AM06 receives the detector data for each event accepted by Level-1 trigger, up to 100 kHz, and it performs a track reconstruction based on hit information from channels of the ATLAS silicon detectors. Thanks to the design of a new associative memory cell and to the layout optimization, the AM06 consumption is only about 1 fJ/bit per comparison. The AM06 has been fabricated and successfully tested with a dedicated test system.
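The pattern-matching principle can be sketched in software. This is a hedged illustration only: the real AM06 compares all 131 072 stored patterns against the hit stream simultaneously in custom silicon, whereas here the comparison is an ordinary sequential loop and the hit words are small integers; only the 8-words-per-pattern organization follows the numbers quoted above:

```python
N_LAYERS = 8  # one coarse hit word per detector layer, as in an AM pattern

def match_roads(patterns, event_hits, max_missing=0):
    """Return indices of patterns whose words are all present in the event
    (allowing up to max_missing empty layers), i.e. candidate 'roads'."""
    hits_per_layer = [set(h) for h in event_hits]          # hits seen on each layer
    roads = []
    for i, pat in enumerate(patterns):
        missing = sum(1 for layer in range(N_LAYERS)
                      if pat[layer] not in hits_per_layer[layer])
        if missing <= max_missing:
            roads.append(i)
    return roads

patterns = [
    (1, 2, 3, 4, 5, 6, 7, 8),     # pattern 0
    (9, 9, 9, 9, 9, 9, 9, 9),     # pattern 1
]
event = [[1, 9], [2], [3], [4], [5], [6], [7], [8]]
print(match_roads(patterns, event))  # pattern 0 matches on all 8 layers
```

The hardware's advantage is that every stored pattern performs this comparison in parallel as hits arrive, so matching finishes as soon as the event has been loaded.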
TELAER: a multi-mode/multi-antenna interferometric airborne SAR system
NASA Astrophysics Data System (ADS)
Perna, Stefano; Amaral, Tiago; Berardino, Paolo; Esposito, Carmen; Jackson, Giuseppe; Pauciullo, Antonio; Vaz Junior, Eurico; Wimmer, Christian; Lanari, Riccardo
2014-05-01
The present contribution is aimed at showing the capabilities of the TELAER airborne Synthetic Aperture Radar (SAR) system, recently upgraded to the interferometric mode [1]. TELAER is an Italian airborne X-band SAR system, mounted onboard a LearJet 35A aircraft. Originally equipped with a single TX/RX antenna, it now operates in single-pass interferometric mode thanks to a system upgrade [1] funded by the Italian National Research Council (CNR), via the Italian Ministry of Education, Universities and Research (MIUR), in the framework of a cooperation between CNR and the Italian Agency for Agriculture Subsidy Payments (AGEA). In the frame of this cooperation, CNR has entrusted the Institute for Electromagnetic Sensing of the Environment (IREA) with managing all the activities related to the system upgrade, including the final flight tests. In this upgrade, two additional receiving X-band antennas were installed to allow, simultaneously, single-pass across-track and along-track interferometry [1]. More specifically, the three antennas are now installed in such a way as to produce three different across-track baselines and two different along-track baselines. Moreover, in the frame of the same upgrade, accurate embedded Global Navigation Satellite System and Inertial Measurement Unit equipment was mounted onboard the LearJet. This allows precise measurement of the tracks described by the SAR antennas during the flight, in order to accurately implement Motion Compensation (MOCO) algorithms [2] during the image formation (focusing) step. It is worth remarking that the TELAER system upgraded to the interferometric mode is very flexible, since the user can set different operational modes characterized by different geometric resolutions and range swaths.
In particular, it is possible to reach up to 0.5 m resolution with a range swath of 2 km; conversely, it is possible to enlarge the range swath up to 10 km at the expense of a degradation of the geometric resolution, which in this case becomes equal to 5 m. Such operational flexibility, together with the above-discussed single-pass interferometric capability and the intrinsic flexibility of airborne platforms, renders the TELAER airborne SAR system a powerful instrument for fast generation of high-resolution Digital Elevation Models, even in natural disaster scenarios. Accordingly, this system can today play a key role not only for strictly scientific purposes, but also for the monitoring of natural hazards, especially if properly integrated with other remote sensing sensors. [1] S. Perna et al., "Capabilities of the TELAER airborne SAR system upgraded to the multi-antenna mode", in Proceedings of the IGARSS 2012 Symposium, Munich, 2012. [2] G. Franceschetti and R. Lanari, Synthetic Aperture Radar Processing, CRC Press, New York, 1999.
[Activities of Bay Area Research Corporation]
NASA Technical Reports Server (NTRS)
2003-01-01
During the final year of this effort the HALFSHEL code was converted to work on a fast single-processor workstation from its parallel configuration. This was done because the NASA Ames NAS facility stopped supporting space science and we no longer had access to parallel computer time. The single-processor version of HALFSHEL was upgraded to address low-density cells by using a 3-D SOR solver to solve the equation ∇ · E = 0. We then upgraded the ionospheric load packages to provide a multiple-species load of the ionosphere out to 1.4 Rm. With these new tools we began to perform a series of simulations to address the major topic of this research effort: determining the loss rate of O⁺ and O₂⁺ from Mars. The simulations used the nominal Parker spiral field and in one case used a field perpendicular to the solar wind flow. The simulations were performed for three different solar EUV fluxes consistent with the different solar evolutionary states believed to have existed before today. The 1 EUV case is the nominal flux of today. The 3 EUV flux is called Epoch 2 and has three times today's flux. The 6 EUV case is Epoch 3 and has 6 times the EUV flux of today.
NASA Technical Reports Server (NTRS)
Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Brockmann, Carsten; Deschamps, Pierre-Yves; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic;
2015-01-01
The established procedure for assessing the quality of atmospheric correction processors and their underlying algorithms is the comparison of satellite data products with related in-situ measurements. Although this approach addresses the accuracy of derived geophysical properties in a straightforward fashion, it is also limited in its ability to catch systematic sensor- and processor-dependent behaviour of satellite products along the scan line, which might impair the usefulness of the data in spatial analyses. The Ocean Colour Climate Change Initiative (OC-CCI) aims to create an ocean colour dataset on a global scale to meet the demands of the ecosystem modelling community, and the need for products with increasing spatial and temporal resolution that also show as few systematic and random errors as possible is growing. Due to cloud cover, even temporal means can be influenced by along-scanline artefacts if the observations are not balanced and effects cannot cancel out mutually. These effects can arise from a multitude of causes which are not easily separated, if at all. Among the sources of artefacts, there are some sensor-specific calibration issues which should lead to similar responses in all processors, as well as processor-specific features which correspond to the individual choices in the algorithms. A set of methods is proposed and applied to MERIS data over two regions of interest in the North Atlantic and the South Pacific Gyre. The normalised water-leaving reflectance products of four atmospheric correction processors, which have also been evaluated in match-up analysis, are analysed in order to find and interpret systematic effects across track. These results are summed up with a semi-objective ranking and are used as a complement to the match-up analysis in the decision for the best Atmospheric Correction (AC) processor.
Although the need for discussion remains concerning the absolutes by which to judge an AC processor, this example demonstrates clearly that relying on the match-up analysis alone can lead to misjudgement.
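One simple way to expose such across-track effects — averaging each detector column over many scan lines so that scene variability cancels while a systematic per-detector offset survives — can be sketched on synthetic data; this is an illustration of the idea, not the OC-CCI method set:

```python
import numpy as np

def across_track_profile(image):
    """Mean of each across-track column over all scan lines."""
    return np.asarray(image).mean(axis=0)

rng = np.random.default_rng(1)
scene = rng.normal(0.05, 0.01, size=(500, 8))   # 500 scan lines x 8 detector columns
scene[:, 3] += 0.004                            # one detector systematically biased high
profile = across_track_profile(scene)
print(int(profile.argmax()))                    # the biased column stands out
```

With 500 lines, the noise on each column mean shrinks to about 0.01/√500 ≈ 0.00045, so a 0.004 systematic offset is unmistakable even though it is well below the per-pixel noise.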
Data acquisition and processing in the ATLAS tile calorimeter phase-II upgrade demonstrator
NASA Astrophysics Data System (ADS)
Valero, A.; Tile Calorimeter System, ATLAS
2017-10-01
The LHC has planned a series of upgrades culminating in the High Luminosity LHC which will have an average luminosity 5-7 times larger than the nominal Run 2 value. The ATLAS Tile Calorimeter will undergo an upgrade to accommodate the HL-LHC parameters. The TileCal readout electronics will be redesigned, introducing a new readout strategy. A Demonstrator program has been developed to evaluate the new proposed readout architecture and prototypes of all the components. In the Demonstrator, the detector data received in the Tile PreProcessors (PPr) are stored in pipeline buffers and upon the reception of an external trigger signal the data events are processed, packed and readout in parallel through the legacy ROD system, the new Front-End Link eXchange system and an ethernet connection for monitoring purposes. This contribution describes in detail the data processing and the hardware, firmware and software components of the TileCal Demonstrator readout system.
Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning
NASA Technical Reports Server (NTRS)
Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri
1991-01-01
Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.
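The copy-reuse mechanism can be illustrated with a toy cache (all names here are hypothetical; a real compiler tracks this through generated communication schedules, not a Python dictionary):

```python
# Sketch of the copy-reuse idea: off-processor values fetched for one loop
# are kept in a local cache and reused by later loops, as long as no write
# has invalidated the stored copy.
class OffProcessorCache:
    def __init__(self, fetch):
        self.fetch = fetch       # stand-in for the communication step
        self.cache = {}
        self.fetches = 0
    def get(self, idx):
        if idx not in self.cache:
            self.cache[idx] = self.fetch(idx)
            self.fetches += 1    # counts actual communication events
        return self.cache[idx]
    def invalidate(self, idx):
        self.cache.pop(idx, None)  # a remote write dirties the local copy

remote = {i: i * i for i in range(10)}   # "off-processor" memory
c = OffProcessorCache(lambda i: remote[i])
loop1 = [c.get(i) for i in (2, 3, 5)]    # first loop fetches
loop2 = [c.get(i) for i in (2, 3, 5)]    # second loop reuses the copies
print(c.fetches)                         # prints 3, not 6
```

The saving is exactly the point of the paper's mechanism: repeated loops over the same off-processor locations pay for communication once.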
Adaptation in pronoun resolution: Evidence from Brazilian and European Portuguese.
Fernandes, Eunice G; Luegi, Paula; Correa Soares, Eduardo; de la Fuente, Israel; Hemforth, Barbara
2018-04-26
Previous research accounting for pronoun resolution as a problem of probabilistic inference has not explored the phenomenon of adaptation, whereby the processor constantly tracks and adapts, rationally, to changes in a statistical environment. We investigate whether Brazilian (BP) and European Portuguese (EP) speakers adapt to variations in the probability of occurrence of ambiguous overt and null pronouns, in two experiments assessing resolution toward subject and object referents. For each variety (BP, EP), participants were faced with either the same number of null and overt pronouns (equal distribution), or with an environment with fewer overt (than null) pronouns (unequal distribution). We find that the preference for interpreting overt pronouns as referring back to an object referent (object-biased interpretation) is higher when there are fewer overt pronouns (i.e., in the unequal, relative to the equal distribution condition). This is especially the case for BP, a variety with higher prior frequency and smaller object-biased interpretation of overt pronouns, suggesting that participants adapted incrementally and integrated prior statistical knowledge with the knowledge obtained in the experiment. We hypothesize that comprehenders adapted rationally, with the goal of maintaining, across variations in pronoun probability, the likelihood of subject and object referents. Our findings unify insights from research in pronoun resolution and in adaptation, and add to previous studies in both topics: They provide evidence for the influence of pronoun probability in pronoun resolution, and for an adaptation process whereby the language processor not only tracks statistical information, but uses it to make interpretational inferences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
CO2 evaporative cooling: The future for tracking detector thermal management
NASA Astrophysics Data System (ADS)
Tropea, P.; Daguin, J.; Petagna, P.; Postema, H.; Verlaat, B.; Zwalinski, L.
2016-07-01
In the last few years, CO2 evaporative cooling has been one of the favourite technologies chosen for the thermal management of tracking detectors at the LHC. The ATLAS Insertable B-Layer and the CMS Pixel phase 1 upgrade have adopted it, and their systems are now operational or under commissioning. The CERN PH-DT team is now merging the lessons learnt on these two systems in order to prepare the design and construction of the cooling systems for the new Upstream Tracker and the VELO upgrade in LHCb, due by 2018. Meanwhile, the preliminary design of the ATLAS and CMS full tracker upgrades has started, and both concepts rely heavily on CO2 evaporative cooling. This paper highlights the performance of the systems now in operation and the challenges to overcome in order to scale them up to the requirements of future generations of trackers. In particular, it focuses on the conceptual design of a new cooling system suited for the large phase 2 upgrade programmes, which will be validated with the construction of a common prototype in the coming years.
Digital signal processor and processing method for GPS receivers
NASA Technical Reports Server (NTRS)
Thomas, Jr., Jess B. (Inventor)
1989-01-01
A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consist of an all-digital, minimum-bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P1, and P2 channels on the L1 C/A carrier phase, thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.
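The digital carrier down-converter with its phase advancer can be sketched numerically (a minimal model; the sample rate and intermediate frequency are invented, and a real receiver closes a tracking loop around the phase increment rather than fixing it):

```python
import numpy as np

# Illustrative sketch of a digital carrier down-converter: a phase
# accumulator ("phase advancer") steps by a programmable increment each
# sample, and the incoming carrier is mixed with the generated replica.
fs = 1_000_000              # sample rate, Hz (assumed)
f_if = 250_000              # intermediate carrier frequency, Hz (assumed)
n = np.arange(2048)
signal = np.cos(2 * np.pi * f_if / fs * n)   # received carrier samples

phase_inc = 2 * np.pi * f_if / fs            # loop-controlled in a real PLL
phase = phase_inc * n                        # the phase advancer output
baseband = signal * np.exp(-1j * phase)      # mix the carrier down to DC

# After mixing, the carrier sits at 0 Hz: the mean is 0.5 (the double-
# frequency term averages out), indicating the replica matches the carrier.
print(round(abs(baseband.mean()), 2))
```

Because the phase is advanced in pure integer-indexed arithmetic, the same input bits always yield bit-identical measurements, which is the repeatability property the patent emphasizes.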
A bunch to bucket phase detector for the RHIC LLRF upgrade platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, K.S.; Harvey, M.; Hayes, T.
2011-03-28
As part of the overall development effort for the RHIC LLRF Upgrade Platform [1,2,3], a generic four channel 16 bit Analog-to-Digital Converter (ADC) daughter module was developed to provide high speed, wide dynamic range digitizing and processing of signals from DC to several hundred megahertz. The first operational use of this card was to implement the bunch to bucket phase detector for the RHIC LLRF beam control feedback loops. This paper will describe the design and performance features of this daughter module as a bunch to bucket phase detector, and also provide an overview of its place within the overall LLRF platform architecture as a high performance digitizer and signal processing module suitable to a variety of applications. In modern digital control and signal processing systems, ADCs provide the interface between the analog and digital signal domains. Once digitized, signals are then typically processed using algorithms implemented in field programmable gate array (FPGA) logic, general purpose processors (GPPs), digital signal processors (DSPs) or a combination of these. For the recently developed and commissioned RHIC LLRF Upgrade Platform, we've developed a four channel ADC daughter module based on the Linear Technology LTC2209 16 bit, 160 MSPS ADC and the Xilinx V5FX70T FPGA. The module is designed to be relatively generic in application, and with minimal analog filtering on board, is capable of processing signals from DC to 500 MHz or more. The module's first application was to implement the bunch to bucket phase detector (BTB-PD) for the RHIC LLRF system. The same module also provides DC digitizing of analog processed BPM signals used by the LLRF system for radial feedback.
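Bunch-to-bucket phase detection of this kind can be sketched as a digital I/Q correlation (an illustrative model only; the frequencies, sample count and phase value are assumptions, not actual RHIC parameters):

```python
import math

# Illustrative sketch of a digital phase detector: the digitized bunch
# signal is correlated with quadrature references at the RF frequency, and
# the phase is recovered as the angle of the resulting I/Q pair.
def phase_detect(samples, f_rf, fs):
    i_acc = q_acc = 0.0
    for n, s in enumerate(samples):
        w = 2 * math.pi * f_rf / fs * n
        i_acc += s * math.cos(w)     # in-phase correlation
        q_acc += s * math.sin(w)     # quadrature correlation
    return math.atan2(q_acc, i_acc)

fs, f_rf, true_phase = 160e6, 28e6, 0.3   # assumed, RHIC-like numbers
sig = [math.cos(2 * math.pi * f_rf / fs * n - true_phase)
       for n in range(1600)]
print(round(phase_detect(sig, f_rf, fs), 2))  # recovers 0.3 rad
```

Accumulating over many samples is what gives the detector its wide dynamic range: noise uncorrelated with the reference averages toward zero while the phase term grows linearly.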
Upgrades to the ISS Water Recovery System
NASA Technical Reports Server (NTRS)
Kayatin, Matthew J.; Carter, Donald L.; Schunk, Richard G.; Pruitt, Jennifer M.
2016-01-01
The International Space Station Water Recovery System (WRS) is comprised of the Water Processor Assembly (WPA) and the Urine Processor Assembly (UPA). The WRS produces potable water from a combination of crew urine (first processed through the UPA), crew latent, and Sabatier product water. Though the WRS has performed well since operations began in November 2008, several modifications have been identified to improve the overall system performance. These modifications can reduce resupply and improve overall system reliability, which is beneficial for the ongoing ISS mission as well as for future NASA manned missions. The following paper details efforts to reduce the resupply mass of the WPA Multifiltration Bed, develop improved catalyst for the WPA Catalytic Reactor, evaluate optimum operation of UPA through parametric testing, and improve reliability of the UPA fluids pump and Distillation Assembly.
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPP) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgrade attract developers' interest in transferring video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is set up first to support complexity scalability; then a number of high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods such as lookup tables are adopted to reduce the computational complexity. Simulation results showed that these ideas could not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
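The lookup-table optimization mentioned above can be sketched for the common clamp-to-8-bit operation in video coding (illustrative; the table size and offset are assumptions):

```python
# Sketch of the lookup-table trick: replace a per-pixel clamp computation
# with a precomputed table indexed by the raw value, trading a little
# memory for branch-free inner loops -- a common software-encoder tactic.
OFFSET = 512                                    # shifts negative inputs
clip_table = [min(max(i - OFFSET, 0), 255) for i in range(1024)]

def clip_lut(x):
    return clip_table[x + OFFSET]               # one array access, no branches

print([clip_lut(v) for v in (-7, 0, 128, 300)])  # prints [0, 0, 128, 255]
```

In the IDCT and motion-compensation inner loops this replaces two comparisons per pixel with a single memory read that usually stays in cache.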
Digital Flight Control System Redundancy Study
1974-07-01
has its own separate power supply. d. Digital Processor - The digital processor consists of the following components: (1) Program Counter - This...1-3 Yaw Axis Control 108 1-4 Autothrottle (Airspeed Hold Mode) 109 1-5 Approach Power Compensation 110 1-6 Glideslope Flare 111 I-7 Glideslope Track...considered to the extent that they imposed constraints on the candidate configurations. Cost, size, weight, power, maintainability, survivability and
Method and apparatus for optimizing a train trip using signal information
Kumar, Ajith Kuttannair; Daum, Wolfgang; Otsubo, Tom; Hershey, John Erik; Hess, Gerald James
2013-02-05
One embodiment of the invention includes a system for operating a railway network comprising a first railway vehicle (400) during a trip along track segments (401/412/420). The system comprises a first element (65) for determining travel parameters of the first railway vehicle (400), a second element (65) for determining travel parameters of a second railway vehicle (418) relative to the track segments to be traversed by the first vehicle during the trip, a processor (62) for receiving information from the first (65) and the second (65) elements and for determining a relationship between occupation of a track segment (401/412/420) by the second vehicle (418) and later occupation of the same track segment by the first vehicle (400) and an algorithm embodied within the processor (62) having access to the information to create a trip plan that determines a speed trajectory for the first vehicle (400), wherein the speed trajectory is responsive to the relationship and further in accordance with one or more operational criteria for the first vehicle (400).
Some issues related to simulation of the tracking and communications computer network
NASA Technical Reports Server (NTRS)
Lacovara, Robert C.
1989-01-01
The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.
Some issues related to simulation of the tracking and communications computer network
NASA Astrophysics Data System (ADS)
Lacovara, Robert C.
1989-12-01
The Communications Performance and Integration branch of the Tracking and Communications Division has an ongoing involvement in the simulation of its flight hardware for Space Station Freedom. Specifically, the communication process between central processor(s) and orbital replaceable units (ORU's) is simulated with varying degrees of fidelity. The results of investigations into three aspects of this simulation effort are given. The most general area involves the use of computer assisted software engineering (CASE) tools for this particular simulation. The second area of interest is simulation methods for systems of mixed hardware and software. The final area investigated is the application of simulation methods to one of the proposed computer network protocols for space station, specifically IEEE 802.4.
Documentary table-top view of a comparison of the General Purpose Computers.
1988-09-13
S88-47513 (Aug 1988) --- The current and future versions of general purpose computers for Space Shuttle orbiters are represented in this frame. The two boxes on the left (AP101B) represent the current GPC configuration, with the input-output processor at far left and the central processing unit at its side. The upgraded version combines both elements in a single unit (far right, AP101S).
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022574 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
Pettit performs the EPIC Card Testing and X2R10 Software Transition
2011-12-28
ISS030-E-022575 (28 Dec. 2011) -- NASA astronaut Don Pettit (foreground), Expedition 30 flight engineer, performs the Enhanced Processor and Integrated Communications (EPIC) card testing and X2R10 software transition. The software transition work will include EPIC card testing and card installations, and monitoring of the upgraded Multiplexer/Demultiplexer (MDM) computers. Dan Burbank, Expedition 30 commander, is setting up a camcorder in the background.
Using Modern Design Tools for Digital Avionics Development
NASA Technical Reports Server (NTRS)
Hyde, David W.; Lakin, David R., II; Asquith, Thomas E.
2000-01-01
Shrinking development time and increased complexity of new avionics force the designer to use modern tools and methods during hardware development. Engineers at the Marshall Space Flight Center have successfully upgraded their design flow and used it to develop a Mongoose V based radiation tolerant processor board for the International Space Station's Water Recovery System. The design flow, based on hardware description languages, simulation, synthesis, hardware models, and full functional software model libraries, allowed designers to fully simulate the processor board from reset through initialization before any boards were built. The fidelity of a digital simulation is limited by the accuracy of the models used and how realistically the designer drives the circuit's inputs during simulation. By using the actual silicon during simulation, device modeling errors are reduced. Numerous design flaws were discovered early in the design phase when they could be easily fixed. The use of hardware models and actual MIPS software loaded into full functional memory models also provided checkout of the software development environment. This paper will describe the design flow used to develop the processor board and give examples of errors that were found using the tools. An overview of the processor board firmware will also be covered.
14- by 22-Foot Subsonic Tunnel Laser Velocimeter Upgrade
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.; Cavone, Angelo A.; Fletcher, Mark T.
2012-01-01
A long-focal-length laser velocimeter constructed in the early 1980s was upgraded using current technology to improve usability, reliability and future serviceability. The original free-space optics were replaced with a state-of-the-art fiber-optic subsystem which allowed most of the optics, including the laser, to be remote from the harsh tunnel environment. General purpose high-speed digitizers incorporated into a standard modular data acquisition system, along with custom signal processing software executed on a desktop computer, served as the replacement for the signal processors. The resulting system increased optical sensitivity with real-time signal/data processing that produced measurement precisions exceeding those of the original system. Monte Carlo simulations, along with laboratory and wind tunnel investigations, were used to determine system characteristics and measurement precision.
NASA Astrophysics Data System (ADS)
Dunagan, S. E.; Flynn, C. J.; Johnson, R. R.; Kacenelenbogen, M. S.; Knobelspiesse, K. D.; LeBlanc, S. E.; Livingston, J. M.; Redemann, J.; Russell, P. B.; Schmid, B.; Segal-Rosenhaimer, M.; Shinozuka, Y.
2014-12-01
The Spectrometers for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) instrument has been developed at NASA Ames in collaboration with Pacific Northwest National Laboratory (PNNL) and NASA Goddard, supported substantially since 2009 by NASA's Radiation Science Program and Earth Science Technology Office. It combines grating spectrometers with fiber optic links to a tracking, scanning head to enable sun tracking, sky scanning, and zenith viewing. 4STAR builds on the long and productive heritage of the NASA Ames Airborne Tracking Sunphotometers (AATS-6 and -14), which have yielded more than 100 peer-reviewed publications and extensive archived data sets in many NASA Airborne Science campaigns from 1986 to the present. The baseline 4STAR instrument has provided extensive data supporting the TCAP (Two Column Aerosol Project, July 2012 & Feb. 2013), SEAC4RS (Studies of Emissions, Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys, 2013), and ARISE (Arctic Radiation - IceBridge Sea and Ice Experiment, 2014) field campaigns. This poster presents plans and progress for an upgrade to the 4STAR instrument to achieve full science capability, including (1) direct-beam sun tracking measurements to derive aerosol optical depth spectra, (2) sky radiance measurements to retrieve aerosol absorption and type (via complex refractive index and mode-resolved size distribution), (3) cloud properties via zenith radiance, and (4) trace gas spectrometry. Technical progress in context with the governing physics is reported on several upgrades directed at improved light collection and usage, particularly as related to spectrally and radiometrically stable propagation through the collection light path. In addition, improvements to field calibration and verification, and flight operability and reliability are addressed.
Mass Memory Storage Devices for AN/SLQ-32(V).
1985-06-01
tactical programs and libraries into the AN/UYK-19 computer, the RP-16 microprocessor, and other peripheral processors (e.g., ADLS and Band 1) will be...software must be loaded into computer memory from the 4-track magnetic tape cartridges (MTCs) on which the programs are stored. Program load begins...software. Future computer programs, which will reside in peripheral processors, include the Automated Decoy Launching System (ADLS) and Band 1. As
The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD
NASA Astrophysics Data System (ADS)
Cox, M. A.; Reed, R.; Mellado, B.
2015-01-01
After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical of consumer ARM Systems on Chip, but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results of performance and throughput testing of four different ARM Cortex Systems on Chip are presented.
Davidson, Lisa S; Geers, Ann E; Brenner, Christine
2010-10-01
Updated cochlear implant technology and optimized fitting can have a substantial impact on speech perception. The effects of upgrades in processor technology and aided thresholds on word recognition at soft input levels and on sentence recognition in noise were examined. We hypothesized that updated speech processors and lower aided thresholds would allow improved recognition of soft speech without compromising performance in noise. 109 teenagers who had used a Nucleus 22 cochlear implant since preschool were tested with their current speech processor(s) (101 unilateral and 8 bilateral): 13 used the Spectra, 22 the ESPrit 22, 61 the ESPrit 3G, and 13 the Freedom. The Lexical Neighborhood Test (LNT) was administered at 70 and 50 dB SPL, and the Bamford-Kowal-Bench (BKB) sentences were administered in quiet and in noise. Aided thresholds were obtained for frequency-modulated tones from 250 to 4,000 Hz. Results were analyzed using repeated-measures analysis of variance. Aided thresholds for the Freedom/3G group were significantly lower (better) than for the Spectra/Sprint group. LNT scores at 50 dB were significantly higher for the Freedom/3G group. No significant differences between the 2 groups were found for the LNT at 70 dB or for sentences in quiet or noise. Adolescents using updated processors that allowed aided detection thresholds of 30 dB HL or better performed the best at soft levels. The BKB-in-noise results suggest that greater access to soft speech does not compromise listening in noise.
Measuring Contours of Coal-Seam Cuts
NASA Technical Reports Server (NTRS)
1983-01-01
Angle transducers measure the angle between track sections as the longwall shearer proceeds along the coal face. A distance transducer functions in conjunction with the angle transducers to obtain relative angles at known positions. When the cut is complete, accumulated data are stored on cassette tape, and the track profile is computed and displayed. A microprocessor-based instrument integrates the small changes in angle and distance.
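Integrating relative angles at known distances into a track profile amounts to dead reckoning, sketched here with invented segment values:

```python
import math

# Illustrative sketch: reconstruct a track profile by integrating small
# relative angles measured at known distances along the track (the values
# below are made up; the instrument's units and sampling are not specified).
def profile(segments):
    """segments: list of (segment_length, relative_angle_rad)."""
    x = y = heading = 0.0
    points = [(0.0, 0.0)]
    for length, d_angle in segments:
        heading += d_angle                  # accumulate relative angle
        x += length * math.cos(heading)     # advance along the face
        y += length * math.sin(heading)     # vertical deviation of the cut
        points.append((x, y))
    return points

pts = profile([(1.0, 0.0), (1.0, 0.1), (1.0, -0.1)])
print(round(pts[-1][1], 3))   # net vertical offset after three segments
```

Because each angle is measured relative to the previous section, errors accumulate with distance, which is why the measured positions from the distance transducer matter.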
Using Track Changes and Word Processor to Provide Corrective Feedback to Learners in Writing
ERIC Educational Resources Information Center
AbuSeileek, A. F.
2013-01-01
This study investigated the effect of computer-mediated corrective feedback types in an English as a foreign language (EFL) intact class over time. The participants were 64 English majors who were assigned randomly into three treatment conditions that gave and received computer-mediated corrective feedback while writing (track changes, word…
Lorens, Artur; Zgoda, Małgorzata; Obrycka, Anita; Skarżynski, Henryk
2010-12-01
Presently, there are only few studies examining the benefits of fine structure information in coding strategies. Against this background, this study aims to assess the objective and subjective performance of children experienced with the C40+ cochlear implant using the CIS+ coding strategy who were upgraded to the OPUS 2 processor using FSP and HDCIS. In this prospective study, 60 children with more than 3.5 years of experience with the C40+ cochlear implant were upgraded to the OPUS 2 processor and fit and tested with HDCIS (Interval I). After 3 months of experience with HDCIS, they were fit with the FSP coding strategy (Interval II) and tested with all strategies (FSP, HDCIS, CIS+). After an additional 3-4 months, they were assessed on all three strategies and asked to choose their take-home strategy (Interval III). The children were tested using the Adaptive Auditory Speech Test which measures speech reception threshold (SRT) in quiet and noise at each test interval. The children were also asked to rate on a Visual Analogue Scale their satisfaction and coding strategy preference when listening to speech and a pop song. However, since not all tests could be performed at one single visit, some children were not able to complete all tests at all intervals. At the study endpoint, speech in quiet showed a significant difference in SRT of 1.0 dB between FSP and HDCIS, with FSP performing better. FSP proved a better strategy compared with CIS+, showing lower SRT results of 5.2 dB. Speech in noise tests showed FSP to be significantly better than CIS+ by 0.7 dB, and HDCIS to be significantly better than CIS+ by 0.8 dB. Both satisfaction and coding strategy preference ratings also revealed that FSP and HDCIS strategies were better than CIS+ strategy when listening to speech and music. FSP was better than HDCIS when listening to speech.
This study demonstrates that long-term pediatric users of the COMBI 40+ are able to upgrade to a newer processor and coding strategy without compromising their listening performance and even improving their performance with FSP after a short time of experience. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine
2014-08-01
We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute intensive, numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
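The benchmarked MVM reconstruction step can be sketched as follows (a minimal model; the matrix dimensions are assumptions rather than figures from the paper, and NumPy stands in for the tuned Xeon Phi code):

```python
import time
import numpy as np

# Illustrative sketch of the AO real-time-control benchmark: time repeated
# matrix-vector multiplies that map wavefront-sensor slopes to deformable-
# mirror commands, and inspect the spread in execution time (jitter).
n_slopes, n_actuators = 5000, 1000       # hypothetical AO system dimensions
rng = np.random.default_rng(1)
cmat = rng.standard_normal((n_actuators, n_slopes)).astype(np.float32)
slopes = rng.standard_normal(n_slopes).astype(np.float32)

times = []
for _ in range(50):
    t0 = time.perf_counter()
    commands = cmat @ slopes             # the wavefront-reconstruction MVM
    times.append(time.perf_counter() - t0)

jitter = max(times) - min(times)         # execution-time variability
print(len(commands), jitter >= 0)        # 1000 actuator commands per frame
```

For a real-time controller the worst-case time, not the mean, sets the achievable loop frequency, which is why the variability measurement matters as much as the speed-up.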
Commissioning of the upgraded CSC Endcap Muon Port Cards at CMS
NASA Astrophysics Data System (ADS)
Ecklund, K.; Liu, J.; Madorsky, A.; Matveev, M.; Michlin, B.; Padley, P.; Rorie, J.
2016-01-01
There are 180 1.6 Gbps optical links from 60 Muon Port Cards (MPC) to the Cathode Strip Chamber Track Finder (CSCTF) in the original system. Before the upgrade each MPC was able to provide up to three trigger primitives from a cluster of nine CSC chambers to the Level 1 CSCTF. With an LHC luminosity increase to 10^35 cm^-2 s^-1 at full energy of 7 TeV/beam, the simulation studies suggest that we can expect two or three times more trigger primitives per bunch crossing from the front-end electronics. To comply with this requirement, the MPC, CSCTF, and optical cables need to be upgraded. The upgraded MPC allows transmission of up to 18 trigger primitives from the peripheral crate. This feature would allow searches for physics signatures of muon jets that require more trigger primitives per trigger sector. At the same time, it is very desirable to preserve all the old optical links for compatibility with the older Track Finder during the transition period at the beginning of Run 2. Installation of the upgraded MPC boards and the new optical cables was completed at the CMS detector in the summer of 2014. We describe the final design of the new MPC mezzanine FPGA, its firmware, and results of tests in the laboratory and in situ with the old and new CSCTF boards.
Network control processor for a TDMA system
NASA Astrophysics Data System (ADS)
Suryadevara, Omkarmurthy; Debettencourt, Thomas J.; Shulman, R. B.
Two unique aspects of designing a network control processor (NCP) to monitor and control a demand-assigned, time-division multiple-access (TDMA) network are described. The first involves the implementation of redundancy by synchronizing the databases of two geographically remote NCPs. The two sets of databases are kept in synchronization by collecting data on both systems, transferring databases, sending incremental updates, and the parallel updating of databases. A periodic audit compares the checksums of the databases to ensure synchronization. The second aspect involves the use of a tracking algorithm to dynamically reallocate TDMA frame space. This algorithm detects and tracks current and long-term load changes in the network. When some portions of the network are overloaded while others have excess capacity, the algorithm automatically calculates and implements a new burst time plan.
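The periodic checksum audit described above can be sketched as follows (illustrative only; the database contents and the choice of hash are assumptions, not details of the NCP design):

```python
import hashlib
import json

# Illustrative sketch of the periodic audit: each NCP computes a checksum
# over its database, and the two checksums are compared to confirm that
# incremental updates have left both copies in synchronization.
def db_checksum(db):
    blob = json.dumps(db, sort_keys=True).encode()  # canonical serialization
    return hashlib.sha256(blob).hexdigest()

primary = {"carrier_1": {"slots": 4}, "carrier_2": {"slots": 2}}
backup = json.loads(json.dumps(primary))   # remote copy after full transfer

in_sync = db_checksum(primary) == db_checksum(backup)
backup["carrier_2"]["slots"] = 3           # simulate a missed incremental update
still_in_sync = db_checksum(primary) == db_checksum(backup)
print(in_sync, still_in_sync)              # prints: True False
```

Comparing fixed-size checksums rather than whole databases keeps the audit traffic between the geographically remote NCPs small, and a mismatch triggers a fresh database transfer.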
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
Command, Control, Communications and Intelligence (C3I) Project Book: Fiscal Year 1992
1992-05-12
PSC-3 is a rugged, lightweight (less than 35 lbs including batteries and whip and medium gain antennas) portable device capable of being paged while...as Materiel Change (MC) projects. They include: TACFIRE Upgrade MC; Water Entry Resolution MC; FIREFINDER Training Device Upgrade MC; and Backplane Wiring...devices consist of a sensor, processor and digital display employed on an individual air defense weapon system (FAM and NWWI). Two models are in development
[Cost Analysis of Cochlear Implantation in Adults].
Raths, S; Lenarz, T; Lesinski-Schiedat, A; Flessa, S
2016-04-01
The number of cochlear implantations has risen steadily in recent years. Reasons for this are an extension of indication criteria, demographic change, increased quality-of-life needs and greater acceptance. The consequence is rising expenditure on cochlear implantation for the statutory health insurance (SHI). A detailed calculation of lifetime costs from the SHI's perspective for postlingually deafened adolescents and adults is essential in estimating future cost developments. Calculations are based on accounting data from the Hannover Medical School. With regard to further life expectancy, the average costs of preoperative diagnosis, surgery, rehabilitation, follow-ups, processor upgrades and electrical maintenance were discounted to their present value at the age of implantation. There is an inverse relation between the cost of unilateral cochlear implantation and age at initial implantation. From the SHI's perspective, the intervention costs between €36,001 and €68,970 ($42,504-$81,429). The largest cost components are the initial implantation and processor upgrades. Compared to the UK, the cost of cochlear implantation in Germany seems to be significantly lower. In particular, the costs of rehabilitation and maintenance in Germany account for only a small percentage of total costs. The costs during the first year of treatment also seem comparatively low. With regard to future SHI spending due to implant innovations and the associated extension of indication, increasing costs may be expected. © Georg Thieme Verlag KG Stuttgart · New York.
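The discounting to present value used in such lifetime-cost calculations can be sketched as follows (the discount rate, upgrade interval and amounts are invented for illustration, not figures from the study):

```python
# Illustrative sketch of present-value discounting: lifetime cost is the
# sum of future expenses (e.g. processor upgrades) discounted back to the
# age at implantation. All numbers below are hypothetical.
def present_value(cash_flows, rate):
    """cash_flows: list of (years_from_implantation, cost)."""
    return sum(cost / (1 + rate) ** t for t, cost in cash_flows)

upgrades = [(6, 8000.0), (12, 8000.0), (18, 8000.0)]  # assumed 6-year cycle
pv = present_value(upgrades, rate=0.03)               # assumed 3% rate
print(round(pv))   # noticeably less than the 24000 nominal total
```

Discounting is why implantation at a younger age, with more upgrades remaining over the patient's life expectancy, still produces a bounded lifetime-cost figure.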
DOT National Transportation Integrated Search
2001-09-01
High-speed trains in the speed range of 100 to 160 mph require tracks of nearly perfect geometry and mechanical uniformity when subjected to moving wheel loads. Therefore, this report briefly describes the remedies being used by various railroads to...
3D Tracking of individual growth factor receptors on polarized cells
NASA Astrophysics Data System (ADS)
Werner, James; Stich, Dominik; Cleyrat, Cedric; Phipps, Mary; Wadinger-Ness, Angela; Wilson, Bridget
We have been developing methods for following the 3D motion of selected biomolecular species throughout mammalian cells. Our approach exploits a custom-designed confocal microscope that uses a unique spatial filter geometry and active feedback 200 times per second to follow fast 3D motion. By exploiting new non-blinking quantum dots as fluorescence labels, individual molecular trajectories can be observed for several minutes. We will also discuss recent instrument upgrades, including the ability to perform spinning disk fluorescence microscopy of the whole mammalian cell simultaneously with 3D molecular tracking experiments. These instrument upgrades were used to quantify the 3D heterogeneous transport of individual growth factor receptors (EGFR) on live human renal cortical epithelial cells.
Nontimber Output Assessments: Tracking Those Other Forest Products
J. Chamberlain; J. Munsell; S. Kruger
2014-01-01
The Forest Service has been assessing timber product output (TPO) for more than 50 years by canvassing primary processors of industrial roundwood in each state on a 3-5 year cycle. TPO studies track what species are cut, where they come from, and what products are produced. Nontimber forest products (NTFPs) are important commodities and a valuable segment of the...
MAPS development for the ALICE ITS upgrade
NASA Astrophysics Data System (ADS)
Yang, P.; Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Gao, C.; Hillemanns, H.; Junique, A.; Kofarago, M.; Keil, M.; Kugathasan, T.; Kim, D.; Kim, J.; Lattuca, A.; Marin Tobon, C. A.; Marras, D.; Mager, M.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Snoeys, W.; Siddhanta, S.; Usai, G.; van Hoorne, J. W.; Yi, J.
2015-03-01
Monolithic Active Pixel Sensors (MAPS) offer the possibility to build pixel detectors and tracking layers with high spatial resolution and low material budget in commercial CMOS processes. Significant progress has been made in the field of MAPS in recent years, and they are now considered for the upgrades of the LHC experiments. This contribution will focus on MAPS detectors developed for the ALICE Inner Tracking System (ITS) upgrade and manufactured in the TowerJazz 180 nm CMOS imaging sensor process on wafers with a high resistivity epitaxial layer. Several sensor chip prototypes have been developed and produced to optimise both charge collection and readout circuitry. The chips have been characterised using electrical measurements, radioactive sources and particle beams. The tests indicate that the sensors satisfy the ALICE requirements and first prototypes with the final size of 1.5 × 3 cm2 have been produced in the first half of 2014. This contribution summarises the characterisation measurements and presents first results from the full-scale chips.
SAMPA Chip: the New 32 Channels ASIC for the ALICE TPC and MCH Upgrades
NASA Astrophysics Data System (ADS)
Adolfsson, J.; Ayala Pabon, A.; Bregant, M.; Britton, C.; Brulin, G.; Carvalho, D.; Chambert, V.; Chinellato, D.; Espagnon, B.; Hernandez Herrera, H. D.; Ljubicic, T.; Mahmood, S. M.; Mjörnmark, U.; Moraes, D.; Munhoz, M. G.; Noël, G.; Oskarsson, A.; Osterman, L.; Pilyar, A.; Read, K.; Ruette, A.; Russo, P.; Sanches, B. C. S.; Severo, L.; Silvermyr, D.; Suire, C.; Tambave, G. J.; Tun-Lanoë, K. M. M.; van Noije, W.; Velure, A.; Vereschagin, S.; Wanlin, E.; Weber, T. O.; Zaporozhets, S.
2017-04-01
This paper presents the test results of the second prototype of SAMPA, the ASIC designed for the upgrade of read-out front end electronics of the ALICE Time Projection Chamber (TPC) and Muon Chamber (MCH). SAMPA is made in a 130 nm CMOS technology with 1.25 V nominal voltage supply and provides 32 channels, with selectable input polarity, and three possible combinations of shaping time and sensitivity. Each channel consists of a Charge Sensitive Amplifier, a semi-Gaussian shaper and a 10-bit ADC; a Digital Signal Processor provides digital filtering and compression capability. In the second prototype run both full chip and single test blocks were fabricated, allowing block characterization and full system behaviour studies. Experimental results are here presented showing agreement with requirements for both the blocks and the full chip.
NASA Astrophysics Data System (ADS)
Bianco, M.; Martoiu, S.; Sidiropoulou, O.; Zibell, A.
2015-12-01
A Micromegas (MM) quadruplet prototype with an active area of 0.5 m2 that adopts the general design foreseen for the upgrade of the innermost forward muon tracking systems (Small Wheels) of the ATLAS detector in 2018-2019, has been built at CERN and is going to be tested in the ATLAS cavern environment during the LHC RUN-II period 2015-2017. The integration of this prototype detector into the ATLAS data acquisition system using custom ATCA equipment is presented. An ATLAS compatible Read Out Driver (ROD) based on the Scalable Readout System (SRS), the Scalable Readout Unit (SRU), will be used in order to transmit the data after generating valid event fragments to the high-level Read Out System (ROS). The SRU will be synchronized with the LHC bunch crossing clock (40.08 MHz) and will receive the Level-1 trigger signals from the Central Trigger Processor (CTP) through the TTCrx receiver ASIC. The configuration of the system will be driven directly from the ATLAS Run Control System. By using the ATLAS TDAQ Software, a dedicated Micromegas segment has been implemented, in order to include the detector inside the main ATLAS DAQ partition. A full set of tests, on the hardware and software aspects, is presented.
Automatic detection, tracking and sensor integration
NASA Astrophysics Data System (ADS)
Trunk, G. V.
1988-06-01
This report surveys the state of the art of automatic detection, tracking, and sensor integration. In the area of detection, various noncoherent integrators such as the moving window integrator, feedback integrator, two-pole filter, binary integrator, and batch processor are discussed. Next, the three techniques for controlling false alarms, adapting thresholds, nonparametric detectors, and clutter maps are presented. In the area of tracking, a general outline is given of a track-while-scan system, and then a discussion is presented of the file system, contact-entry logic, coordinate systems, tracking filters, maneuver-following logic, tracking initiating, track-drop logic, and correlation procedures. Finally, in the area of multisensor integration the problems of colocated-radar integration, multisite-radar integration, radar-IFF integration, and radar-DF bearing strobe integration are treated.
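Two of the noncoherent integrators surveyed here, the moving-window integrator and the binary (M-of-N) integrator, can be sketched in a few lines (window lengths and thresholds below are illustrative):

```python
from collections import deque

def moving_window_detect(samples, window, threshold):
    """Noncoherent moving-window integrator: sum the last `window`
    samples and declare a detection when the sum crosses `threshold`."""
    buf, total, hits = deque(), 0.0, []
    for i, s in enumerate(samples):
        buf.append(s)
        total += s
        if len(buf) > window:
            total -= buf.popleft()
        if len(buf) == window and total >= threshold:
            hits.append(i)
    return hits

def binary_integrate(samples, per_pulse_threshold, window, m_of_n):
    """Binary (M-of-N) integrator: threshold each pulse to 0/1, then
    detect when at least M of the last N binary decisions are 1."""
    bits = [1 if s >= per_pulse_threshold else 0 for s in samples]
    return [i for i in range(window - 1, len(bits))
            if sum(bits[i - window + 1:i + 1]) >= m_of_n]
```

The binary integrator trades a small detection loss for robustness against impulsive interference, one of the classic comparisons this survey covers.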
NEP technology: FY 1992 milestones (NASA LeRC)
NASA Technical Reports Server (NTRS)
Sovey, Jim
1993-01-01
A discussion of Nuclear Electric Propulsion (NEP) thrusters and facilities is presented in vugraph form. The NEP thrusters are discussed in the context of the following three items: (1) establishing a 100 H test capability for 100-kW magnetoplasmadynamic (MPD) thrusters; (2) demonstrating a lightweight 20-kW krypton ion thruster; and (3) the optimization of the design of low-mass power processor transformers. The primary accomplishment at NEP facilities was the completion of the Electric Propulsion Laboratory's (EPL's) tank 5 cryopump upgrade.
Detector Developments for the High Luminosity LHC Era (4/4)
Bortoletto, Daniela
2018-02-09
Tracking Detectors - Part II. Calorimetry, muon detection, vertexing, and tracking will play a central role in determining the physics reach of the High Luminosity LHC era. In these lectures we will cover the requirements, options, and the R&D efforts necessary to upgrade the current LHC detectors and enable discoveries.
Detector Developments for the High Luminosity LHC Era (3/4)
Bortoletto, Daniela
2018-01-23
Tracking Detectors - Part I. Calorimetry, muon detection, vertexing, and tracking will play a central role in determining the physics reach of the High Luminosity LHC era. In these lectures we will cover the requirements, options, and the R&D efforts necessary to upgrade the current LHC detectors and enable discoveries.
The economics of data acquisition computers for ST and MST radars
NASA Technical Reports Server (NTRS)
Watkins, B. J.
1983-01-01
Some low cost options for data acquisition computers for ST (stratosphere, troposphere) and MST (mesosphere, stratosphere, troposphere) radars are presented. The particular equipment discussed reflects choices made by the University of Alaska group, but of course many other options exist. The low cost microprocessor and array processor approach presented here has several advantages because of its modularity. An inexpensive system may be configured for a minimum performance ST radar, whereas a multiprocessor and/or a multi-array-processor system may be used for a higher performance MST radar. This modularity is important for a network of radars because the initial cost is minimized while future upgrades remain possible at minimal expense. This modularity also helps lower the cost of software development, because system expansions should require few software changes. The functions of the radar computer will be to obtain Doppler spectra in near real time with some minor analysis such as vector wind determination.
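The core computation mentioned, obtaining Doppler spectra in near real time, is essentially a periodogram of the coherent (I/Q) samples from one range gate; a minimal sketch, with an illustrative PRF and a pure-tone test signal:

```python
import cmath
import math

def doppler_spectrum(iq, prf):
    """Periodogram of a coherent pulse train: DFT of the complex (I/Q)
    samples from one range gate. Bin k maps to the Doppler frequency
    k*prf/N, folded into [-prf/2, prf/2)."""
    n = len(iq)
    power = []
    for k in range(n):
        s = sum(x * cmath.exp(-2j * math.pi * k * m / n)
                for m, x in enumerate(iq))
        power.append(abs(s) ** 2 / n)
    freqs = [(k if k < n / 2 else k - n) * prf / n for k in range(n)]
    return freqs, power

# A pure Doppler shift at prf/8 should peak in bin 1 of an 8-point DFT.
prf, n = 1000.0, 8
iq = [cmath.exp(2j * math.pi * (prf / 8) * (m / prf)) for m in range(n)]
freqs, power = doppler_spectrum(iq, prf)
```

In a real system this DFT would run on the array processor, with the host microprocessor handling buffering and the vector wind fit.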
Multicore Challenges and Benefits for High Performance Scientific Computing
Nielsen, Ida M. B.; Janssen, Curtis L.
2008-01-01
Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
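The hybrid message-passing/multi-threading model can be illustrated with a row-blocked matrix multiply, where the row distribution stands in for the message-passing layer and a thread pool stands in for the threads within one multicore node (a generic sketch, not the authors' code):

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_rows(a, b, row_range):
    """Compute the rows of C = A*B assigned to one 'rank'."""
    n, k = len(b[0]), len(b)
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in row_range]

def hybrid_matmul(a, b, workers=4):
    """Row-blocked matrix multiply: each worker owns a cyclic set of
    rows (the distributed-memory decomposition); the thread pool plays
    the role of multithreading within a multicore node."""
    m = len(a)
    blocks = [range(r, m, workers) for r in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda rr: (rr, matmul_rows(a, b, rr)),
                              blocks))
    c = [None] * m
    for rr, rows in parts:
        for i, row in zip(rr, rows):
            c[i] = row
    return c
```

A production version would replace the row handoff with MPI messages between nodes and keep the thread pool (or OpenMP) within each node.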
Evaluation of the Intel iWarp parallel processor for space flight applications
NASA Technical Reports Server (NTRS)
Hine, Butler P., III; Fong, Terrence W.
1993-01-01
The potential of a DARPA-sponsored advanced processor, the Intel iWarp, for use in future SSF Data Management Systems (DMS) upgrades is evaluated through integration into the Ames DMS testbed and applications testing. The iWarp is a distributed, parallel computing system well suited for high performance computing applications such as matrix operations and image processing. The system architecture is modular, supports systolic and message-based computation, and is capable of providing massive computational power in a low-cost, low-power package. As a consequence, the iWarp offers significant potential for advanced space-based computing. This research seeks to determine the iWarp's suitability as a processing device for space missions. In particular, the project focuses on evaluating the ease of integrating the iWarp into the SSF DMS baseline architecture and the iWarp's ability to support computationally stressing applications representative of SSF tasks.
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei
1991-01-01
The feasibility is analyzed of upgrading the Intel 386 microprocessor, which has been proposed as the baseline processor for the Space Station Freedom (SSF) Data Management System (DMS), to the more advanced i486 microprocessor. The items compared between the two processors include the instruction set architecture, power consumption, the MIL-STD-883C Class S (Space) qualification schedule, and performance. The advantages of the i486 over the 386 are (1) lower power consumption; and (2) higher floating point performance. The i486 on-chip cache does not have parity check or error detection and correction circuitry. The i486 with on-chip cache disabled, however, has lower integer performance than the 386 without cache, which is the current DMS design choice. Adding cache to the 386/387 DX memory hierarchy appears to be the most beneficial change to the current DMS design at this time.
JPL's Real-Time Weather Processor project (RWP) metrics and observations at system completion
NASA Technical Reports Server (NTRS)
Loesh, Robert E.; Conover, Robert A.; Malhotra, Shan
1990-01-01
As an integral part of the overall upgraded National Airspace System (NAS), the objective of the Real-Time Weather Processor (RWP) project is to improve the quality of weather information and the timeliness of its dissemination to system users. To accomplish this, an RWP will be installed in each of the Center Weather Service Units (CWSUs), located in 21 of the 23 Air Route Traffic Control Centers (ARTCCs). The RWP System is a prototype system. It is planned that the software will be GFE and that production hardware will be acquired via industry competitive procurement. The ARTCC is a facility established to provide air traffic control service to aircraft operating on Instrument Flight Rules (IFR) flight plans within controlled airspace, principally during the en route phase of the flight. Covered here are requirement metrics, Software Problem Failure Reports (SPFRs), and Ada portability metrics and observations.
The UNAVCO Real-time GPS Data Processing System and Community Reference Data Sets
NASA Astrophysics Data System (ADS)
Sievers, C.; Mencin, D.; Berglund, H. T.; Blume, F.; Meertens, C. M.; Mattioli, G. S.
2013-12-01
UNAVCO has constructed a real-time GPS (RT-GPS) network of 420 GPS stations. The majority of the streaming stations come from the EarthScope Plate Boundary Observatory (PBO) through an NSF-ARRA funded Cascadia Upgrade Initiative that upgraded 100 backbone stations throughout the PBO footprint and 282 stations focused in the Pacific Northwest. Additional contributions from NOAA (~30 stations in Southern California) and the USGS (8 stations at Yellowstone) account for the other real-time stations. Following the outcomes of a community workshop on real-time GPS position data products and formats hosted by UNAVCO in spring 2011, UNAVCO now provides real-time PPP positions for all 420 stations using Trimble's PIVOT software, and for 50 stations using TrackRT at the volcanic centers located at Yellowstone (Figure 1 shows an example ensemble of TrackRT networks used in processing the Yellowstone data), Mt St Helens, and Montserrat. The UNAVCO real-time system has the potential to enhance our understanding of earthquakes, seismic wave propagation, volcanic eruptions, magmatic intrusions, movement of ice, landslides, and the dynamics of the atmosphere. Beyond its increasing uses for science and engineering, RT-GPS has the potential to provide early warning of hazards to emergency managers, utilities, other infrastructure managers, first responders and others. With the goal of characterizing stability and improving software and higher-level products based on real-time GPS time series, UNAVCO is developing an open community standard data set with which data processors can provide solutions based on common sets of RT-GPS data that simulate real-world scenarios and events.
UNAVCO is generating standard data sets for playback that include not only real and synthetic events but also background noise, antenna movement (e.g., steps, linear trends, sine waves, and realistic earthquake-like motions), receiver drop out and online return, interruption of communications (such as, bulk regional failures due to specific carriers during an actual event), satellites rising and setting, various constellation outages and differences in performance between real-time and simulated (retroactive) real-time. We present an overview of the UNAVCO RT-GPS system, a comparison of the UNAVCO generated real-time data products, and an overview of available common data sets.
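The playback data sets described, background noise plus steps, linear trends, and sine waves, can be sketched as a synthetic position stream (all magnitudes and time scales below are illustrative assumptions, not UNAVCO's actual test values):

```python
import math
import random

def synthetic_position(t, step_time=300.0, step_size=0.05,
                       trend=1e-6, amp=0.01, period=86400.0, noise=0.002):
    """One component (metres) of a synthetic real-time GPS stream:
    Gaussian background noise + linear trend + daily sine + a
    coseismic-like step at `step_time` seconds."""
    x = trend * t                                  # secular trend
    x += amp * math.sin(2.0 * math.pi * t / period)  # daily signal
    if t >= step_time:
        x += step_size                             # earthquake-like step
    x += random.gauss(0.0, noise)                  # receiver noise
    return x

random.seed(0)  # deterministic playback, as a common data set requires
series = [synthetic_position(t) for t in range(0, 600)]
```

Fixing the random seed matters for this use case: every processing group must replay exactly the same stream to make their solutions comparable.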
Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor
Wei, Ching-Chuan; Song, Yu-Chang; Chang, Chia-Chi; Lin, Chuan-Bi
2016-01-01
Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel perpendicularly tracks the sun, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is logical to assume that the points of the brightest region in the sky image represent the location of the sun. Then, the center of the brightest region is assumed to be the solar center, and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information on the sun center is sent to the embedded processor to control two servo motors that are capable of moving both horizontally and vertically to track the sun. In comparison with existing sun tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day and building shelter. The practical sun tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real-time. PMID:27898002
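The brightest-region method itself reduces to thresholding the sky image and taking the centroid of the bright pixels; a minimal sketch on a toy image (the threshold value is an illustrative assumption):

```python
def sun_center(image, threshold):
    """Centroid of the brightest region: pixels at or above `threshold`
    are taken to belong to the sun; their mean (row, col) is the
    estimated sun center, as in the brightest-region method above."""
    pts = [(r, c) for r, row in enumerate(image)
           for c, v in enumerate(row) if v >= threshold]
    if not pts:
        return None  # e.g. heavy overcast: no bright region found
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

# Toy 5x5 "sky": a bright 2x2 blob whose centroid is (1.5, 2.5).
sky = [[10, 10, 10, 10, 10],
       [10, 10, 250, 250, 10],
       [10, 10, 250, 250, 10],
       [10, 10, 10, 10, 10],
       [10, 10, 10, 10, 10]]
center = sun_center(sky, threshold=200)
```

The (row, col) centroid is then converted to pan/tilt commands for the two servo motors.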
Compiler analysis for irregular problems in FORTRAN D
NASA Technical Reports Server (NTRS)
Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel
1992-01-01
We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.
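The runtime preprocessing described follows the inspector/executor pattern: an inspector scans the irregular subscripts once to build a communication schedule, and later loops reuse the fetched off-processor copies instead of re-communicating. A toy sketch (the data layout and values are illustrative):

```python
def inspector(index_list, owned):
    """Inspector: scan the indices a loop will access and build, once,
    the sorted list of off-processor elements that must be fetched."""
    return sorted({i for i in index_list if i not in owned})

def executor(index_list, owned, ghost):
    """Executor: run the loop body using local data plus prefetched
    ghost copies; no further communication is needed, so the same
    schedule and ghost copies can be reused by later loops that touch
    the same off-processor locations."""
    return [owned[i] if i in owned else ghost[i] for i in index_list]

owned = {0: 1.0, 1: 2.0, 2: 3.0}    # elements this processor owns
accesses = [2, 5, 1, 5, 7]          # irregular subscript pattern
fetch = inspector(accesses, owned)  # communication schedule, built once
ghost = {5: 50.0, 7: 70.0}          # values fetched from other processors
vals = executor(accesses, owned, ghost)
```

The compiler analysis in the paper decides when the `fetch` schedule and `ghost` copies remain valid across loops, which is exactly what amortizes the preprocessing overhead.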
Mobile Telemetry Van Remote Control Upgrade
2012-05-17
Advantages of Remote Control System Upgrade • Summary overview: remote control of the Telemetry Mobile Ground Support (TMGS) Van is proposed to allow... NWC personnel provided valuable data for full-function remote control of telemetry tracking vans. • Background: TMGS Vans support Flight Test... Remote control capability from the main TM site at Building 5790 currently allows support via the TMGS Van at the nearby C-15 Site, Plant 42 in Palmdale, and as far
Generic element processor (application to nonlinear analysis)
NASA Technical Reports Server (NTRS)
Stanley, Gary
1989-01-01
The focus here is on one aspect of the Computational Structural Mechanics (CSM) Testbed: finite element technology. The approach involves a Generic Element Processor: a command-driven, database-oriented software shell that facilitates introduction of new elements into the testbed. This shell features an element-independent corotational capability that upgrades linear elements to geometrically nonlinear analysis, and corrects the rigid-body errors that plague many contemporary plate and shell elements. Specific elements that have been implemented in the Testbed via this mechanism include the Assumed Natural-Coordinate Strain (ANS) shell elements, developed with Professor K. C. Park (University of Colorado, Boulder), a new class of curved hybrid shell elements, developed by Dr. David Kang of LPARL (formerly a student of Professor T. Pian), other shell and solid hybrid elements developed by NASA personnel, and recently a repackaged version of the workhorse shell element used in the traditional STAGS nonlinear shell analysis code. The presentation covers: (1) user and developer interfaces to the generic element processor, (2) an explanation of the built-in corotational option, (3) a description of some of the shell-elements currently implemented, and (4) application to sample nonlinear shell postbuckling problems.
Real-Time Data Processing in the muon system of the D0 detector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neeti Parashar et al.
2001-07-03
This paper presents a real-time application of the 16-bit fixed point Digital Signal Processors (DSPs), in the Muon System of the D0 detector located at the Fermilab Tevatron, presently the world's highest-energy hadron collider. As part of the Upgrade for a run beginning in the year 2000, the system is required to process data at an input event rate of 10 KHz without incurring significant deadtime in readout. The ADSP21csp01 processor has high I/O bandwidth, single cycle instruction execution and fast task switching support to provide efficient multisignal processing. The processor's internal memory consists of 4K words of Program Memory and 4K words of Data Memory. In addition there is an external memory of 32K words for general event buffering and 16K words of Dual Port Memory for input data queuing. This DSP fulfills the requirement of the Muon subdetector systems for data readout. All error handling, buffering, formatting and transferring of the data to the various trigger levels of the data acquisition system is done in software. The algorithms developed for the system complete these tasks in about 20 μs per event.
An experiment in hurricane track prediction using parallel computing methods
NASA Technical Reports Server (NTRS)
Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.
1994-01-01
The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
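Recovering the streamfunction from the forecast vorticity means solving a Poisson equation. The paper uses direct block cyclic reduction on parallel hardware; as a compact stand-in for that step, a Jacobi iteration on a toy grid shows the same recovery (grid size and vortex placement are illustrative):

```python
def recover_streamfunction(vort, h=1.0, iters=2000):
    """Solve the Poisson equation  del^2 psi = zeta  on a small grid
    with psi = 0 on the boundary, by Jacobi iteration.  (The paper uses
    direct block cyclic reduction; Jacobi is a compact stand-in here.)"""
    ny, nx = len(vort), len(vort[0])
    psi = [[0.0] * nx for _ in range(ny)]
    for _ in range(iters):
        new = [row[:] for row in psi]
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                # 5-point Laplacian update with the vorticity source term
                new[i][j] = 0.25 * (psi[i - 1][j] + psi[i + 1][j] +
                                    psi[i][j - 1] + psi[i][j + 1] -
                                    h * h * vort[i][j])
        psi = new
    return psi

# Point vortex in the middle of a 9x9 grid.
zeta = [[0.0] * 9 for _ in range(9)]
zeta[4][4] = 1.0
psi = recover_streamfunction(zeta)
```

Direct methods such as block cyclic reduction replace this iteration with a fixed number of parallel elimination steps, which is where the reported speed-up comes from.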
Video Guidance Sensor and Time-of-Flight Rangefinder
NASA Technical Reports Server (NTRS)
Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.
2007-01-01
A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. 
The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding) mode, and would govern operation in the range-finding mode.
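The time-of-flight principle is simply that the measured round-trip time of the laser pulse gives the one-way range as c·t/2:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_seconds):
    """Time-of-flight range: the pulse travels to the target and back,
    so the one-way distance is c * t / 2."""
    return C * round_trip_seconds / 2.0

# A 20-microsecond round trip corresponds to roughly 3 km, the regime
# the proposed VGS targets.
r = tof_range(20e-6)
```

At these distances a nanosecond of timing uncertainty maps to about 15 cm of range error, which is why the FPGA's timing signals matter for the range-finding mode.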
Hardware accelerator design for tracking in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil
2011-10-01
Smart cameras are important components in video analysis. A smart camera needs to detect interesting moving objects, track such objects from frame to frame, and analyze object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (such as a PowerPC) achieves only a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.
2012-01-01
Initial optical communications experiments with a Vertex polished aluminum panel are described. The polished panel was mounted on the main reflector of the DSN's research antenna at DSS-13. The PSF was recorded via a remotely controlled digital camera mounted on the subreflector structure. The initial PSF generated by Jupiter showed significant tilt error and some mechanical deformation. After upgrades, the PSF improved significantly, leading to much better concentration of light. The communications performance of the initial and upgraded panel structures was compared: after the upgrades, simulated PPM symbol error probability decreased by six orders of magnitude. Work is continuing to demonstrate closed-loop tracking of sources from zenith to horizon and to better characterize communications performance in realistic daytime background environments.
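For context on why better light concentration moves PPM symbol error probability by orders of magnitude, a standard textbook model (not this paper's exact analysis) is useful: over an ideal photon-counting channel with no background, an M-ary PPM symbol is lost only when the signal slot registers zero Poisson counts:

```python
import math

def ppm_symbol_error(mean_signal_photons, M=16):
    """Hard-decision M-ary PPM, ideal photon counting, no background:
    an erasure occurs when the signal slot detects zero photons
    (probability e^-Ks for Poisson counts), after which a random guess
    among the M slots is wrong with probability (M-1)/M."""
    return (M - 1) / M * math.exp(-mean_signal_photons)
```

Under this model the error probability falls exponentially with detected signal photons, so a modest improvement in collected light (here, from the panel upgrades) can plausibly yield the six-order-of-magnitude drop reported.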
NASA Technical Reports Server (NTRS)
1994-01-01
The objective of this contract was the investigation of the potential performance gains that would result from an upgrade of the Space Station Freedom (SSF) Data Management System (DMS) Embedded Data Processor (EDP) '386' design with the Intel Pentium (registered trade-mark of Intel Corp.) '586' microprocessor. The Pentium ('586') is the latest member of the industry standard Intel X86 family of CISC (Complex Instruction Set Computer) microprocessors. This contract was scheduled to run in parallel with an internal IBM Federal Systems Company (FSC) Internal Research and Development (IR&D) task that had the goal to generate a baseline flight design for an upgraded EDP using the Pentium. This final report summarizes the activities performed in support of Contract NAS2-13758. Our plan was to baseline performance analyses and measurements on the latest state-of-the-art commercially available Pentium processor, representative of the proposed space station design, and then phase to an IBM capital funded breadboard version of the flight design (if available from IR&D and Space Station work) for additional evaluation of results. Unfortunately, the phase-over to the flight design breadboard did not take place, since the IBM Data Management System (DMS) for the Space Station Freedom was terminated by NASA before the referenced capital funded EDP breadboard could be completed. The baseline performance analyses and measurements, however, were successfully completed, as planned, on the commercial Pentium hardware. The results of those analyses, evaluations, and measurements are presented in this final report.
Multiple Hypothesis Tracking (MHT) for Space Surveillance: Results and Simulation Studies
2013-09-01
1. INTRODUCTION The Joint Space Operations Center (JSpOC) currently tracks more than 22,000 satellites and space debris orbiting the Earth [1, 2]. With the anticipated installation of more accurate sensors and the increased probability of future collisions between space objects, the ...
NASA Astrophysics Data System (ADS)
Ohene-Kwofie, Daniel; Otoo, Ekow
2015-10-01
The ATLAS detector, operated at the Large Hadron Collider (LHC) at CERN, records proton-proton collisions every 50 ns, resulting in a sustained data flow of up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem then is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of the advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high-throughput general-purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memory of the PUs is aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput.
2007-12-11
Implemented both carrier and code phase tracking loops for performance evaluation of a minimum-power beamforming algorithm and a null-steering algorithm. [Figure 5: Schematic of a K-element antenna array spatial adaptive processor. Figure 6: Schematic of a K-element antenna array space-time adaptive processor.] Two additional ...
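The minimum-power beamformer referenced above can be sketched numerically as a minimum-variance distortionless-response (MVDR) solution: minimize the output power w^H R w subject to unit gain toward the desired direction. This is an illustrative NumPy sketch, not the report's implementation; the array geometry, signal model, and all parameter values are assumptions.

```python
import numpy as np

def steering_vector(k, theta, d=0.5):
    """Ideal steering vector for a k-element uniform linear array
    (element spacing d in wavelengths, arrival angle theta in radians)."""
    n = np.arange(k)
    return np.exp(2j * np.pi * d * n * np.sin(theta))

def mvdr_weights(R, a):
    """Minimum-power weights: minimize w^H R w subject to w^H a = 1,
    giving w = R^-1 a / (a^H R^-1 a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Hypothetical scenario: 4 elements, desired source at broadside,
# strong interferer at 30 degrees off broadside.
K = 4
a_sig = steering_vector(K, 0.0)
a_jam = steering_vector(K, np.deg2rad(30.0))
rng = np.random.default_rng(0)
N = 10000
snapshots = (np.outer(a_sig, np.ones(N))                    # desired signal
             + np.outer(a_jam, 10 * rng.standard_normal(N)) # interferer
             + 0.1 * rng.standard_normal((K, N)))           # sensor noise
R = snapshots @ snapshots.conj().T / N                      # sample covariance
w = mvdr_weights(R, a_sig)
print(abs(w.conj() @ a_sig))   # distortionless: unity gain toward the source
print(abs(w.conj() @ a_jam))   # deep null toward the interferer
```

The null-steering behavior falls out of the same weights: minimizing output power automatically places a null on the strong interferer while the constraint preserves the look direction.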
Michael H. L. S. Wang; Cancelo, Gustavo; Green, Christopher; ...
2016-06-25
Here, we explore the Micron Automata Processor (AP) as a suitable commodity technology that can address the growing computational needs of pattern recognition in High Energy Physics (HEP) experiments. A toy detector model is developed for which an electron track confirmation trigger based on the Micron AP serves as a test case. Although primarily meant for high speed text-based searches, we demonstrate a proof of concept for the use of the Micron AP in a HEP trigger application.
ACE: Automatic Centroid Extractor for real time target tracking
NASA Technical Reports Server (NTRS)
Cameron, K.; Whitaker, S.; Canaris, J.
1990-01-01
A high performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster scan image into objects and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real time processing of video images having a pixel rate of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial link communications channel with no additional hardware. The full custom VLSI processor was implemented in a 1.6 μm CMOS process and measures 7200 μm on a side.
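The grouping-and-centroid operation can be sketched in software. The following Python sketch is illustrative only (the chip's actual raster-scan grouping pipeline is not described in enough detail to reproduce); it uses a simple 4-connected flood fill and intensity-weighted centroids.

```python
from collections import deque
import numpy as np

def object_centroids(img, threshold=0):
    """Group contiguous above-threshold pixels (4-connectivity) and return
    the intensity-weighted centroid (row, col) of each object, in the order
    the objects are first encountered by a raster scan."""
    h, w = img.shape
    seen = np.zeros((h, w), dtype=bool)
    centroids = []
    for r in range(h):
        for c in range(w):
            if img[r, c] > threshold and not seen[r, c]:
                # BFS flood fill over the newly found object.
                q = deque([(r, c)])
                seen[r, c] = True
                m = sr = sc = 0.0
                while q:
                    y, x = q.popleft()
                    v = float(img[y, x])
                    m += v
                    sr += v * y
                    sc += v * x
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny, nx] > threshold
                                and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                centroids.append((sr / m, sc / m))
    return centroids

frame = np.zeros((8, 8), dtype=np.uint8)
frame[1:3, 1:3] = 10            # one 2x2 object
frame[5, 5] = 4                 # one single-pixel object
print(object_centroids(frame))  # [(1.5, 1.5), (5.0, 5.0)]
```

Unlike the chip, this sketch makes multiple passes over pixels; the hardware's appeal is doing the grouping in a single raster pass at 20 Mpixel/s.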
Software Defined GPS Receiver for International Space Station
NASA Technical Reports Server (NTRS)
Duncan, Courtney B.; Robison, David E.; Koelewyn, Cynthia Lee
2011-01-01
JPL is providing a software defined radio (SDR) that will fly on the International Space Station (ISS) as part of the CoNNeCT project under NASA's SCaN program. The SDR consists of several modules including a Baseband Processor Module (BPM) and a GPS Module (GPSM). The BPM executes applications (waveforms) consisting of software components for the embedded SPARC processor and logic for two Virtex II Field Programmable Gate Arrays (FPGAs) that operate on data received from the GPSM. GPS waveforms on the SDR are enabled by an L-Band antenna, low noise amplifier (LNA), and the GPSM that performs quadrature downconversion at L1, L2, and L5. The GPS waveform for the JPL SDR will acquire and track L1 C/A, L2C, and L5 GPS signals from a CoNNeCT platform on ISS, providing the best GPS-based positioning of ISS achieved to date, the first use of multiple frequency GPS on ISS, and potentially the first L5 signal tracking from space. The system will also enable various radiometric investigations on ISS such as local multipath or ISS dynamic behavior characterization. In following the software-defined model, this work will create a highly portable GPS software and firmware package that can be adapted to another platform with the necessary processor and FPGA capability. This paper also describes ISS applications for the JPL CoNNeCT SDR GPS waveform, possibilities for future global navigation satellite system (GNSS) tracking development, and the applicability of the waveform components to other space navigation applications.
NASA Astrophysics Data System (ADS)
Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.
1986-06-01
In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing and aims at real time response. Parallel processing and real time response are facilitated in part by a dual bus system: a VME control bus and a high speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of a combined throughput of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.
The Telecommunications and Data Acquisition Report
NASA Technical Reports Server (NTRS)
Posner, E. C. (Editor)
1988-01-01
Deep Space Network and Systems topics addressed include: tracking and ground-base navigation; communications, spacecraft-ground; station control and system technology; capabilities for existing projects; and network upgrading and sustaining.
Rectangular Array Of Digital Processors For Planning Paths
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.; Fossum, Eric R.; Nixon, Robert H.
1993-01-01
Prototype 24 x 25 rectangular array of asynchronous parallel digital processors rapidly finds best path across two-dimensional field, which could be patch of terrain traversed by robotic or military vehicle. Implemented as single-chip very-large-scale integrated circuit. Excepting processors on edges, each processor communicates with four nearest neighbors along paths representing travel to north, south, east, and west. Each processor contains delay generator in form of 8-bit ripple counter, preset to 1 of 256 possible values. Operation begins with choice of processor representing starting point. It transmits signals to nearest-neighbor processors, which retransmit to other neighboring processors, and process repeats until signals have propagated across entire field.
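In software, the chip's timing wave is equivalent to a first-arrival (shortest-path) search over per-cell delays: each cell fires after its preset delay, and tracing strictly decreasing arrival times back from the goal recovers the quickest path. A hedged Python sketch of that analogue (the grid, delay values, and endpoints below are invented for illustration):

```python
import heapq

def plan_path(delay, start, goal):
    """Software analogue of the wavefront chip: each cell holds a preset
    delay; a timing wave spreads from `start` through 4-neighbours, and
    first-arrival times trace out the quickest path to `goal`."""
    h, w = len(delay), len(delay[0])
    INF = float("inf")
    arrive = [[INF] * w for _ in range(h)]
    arrive[start[0]][start[1]] = 0
    heap = [(0, start)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > arrive[r][c]:
            continue                      # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nt = t + delay[nr][nc]    # neighbour fires after its delay
                if nt < arrive[nr][nc]:
                    arrive[nr][nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    # Backtrack from goal toward the neighbour that fired earliest.
    path = [goal]
    r, c = goal
    while (r, c) != start:
        r, c = min(((nr, nc) for nr, nc in
                    ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                    if 0 <= nr < h and 0 <= nc < w),
                   key=lambda p: arrive[p[0]][p[1]])
        path.append((r, c))
    return path[::-1], arrive[goal[0]][goal[1]]

terrain = [[1, 9, 1],
           [1, 9, 1],
           [1, 1, 1]]
path, t = plan_path(terrain, (0, 0), (0, 2))
print(path, t)   # wave routes around the high-delay middle column
```

The chip does this with no heap at all: every cell counts down in parallel, so the whole field is solved in one wave in hardware time.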
NASA Technical Reports Server (NTRS)
2008-01-01
As Global Positioning Satellite (GPS) applications become more prevalent for land- and air-based vehicles, GPS applications for space vehicles will also increase. The Applied Technology Directorate of Kennedy Space Center (KSC) has developed a lightweight, low-cost GPS Metric Tracking Unit (GMTU), the first of two steps in developing a lightweight, low-cost Space-Based Tracking and Command Subsystem (STACS) designed to meet Range Safety's link margin and latency requirements for vehicle command and telemetry data. The goals of STACS are to improve Range Safety operations and expand tracking capabilities for space vehicles. STACS will track the vehicle, receive commands, and send telemetry data through the space-based asset, which will dramatically reduce dependence on ground-based assets. The other step was the Low-Cost Tracking and Data Relay Satellite System (TDRSS) Transceiver (LCT2), developed by the Wallops Flight Facility (WFF), which allows the vehicle to communicate with a geosynchronous relay satellite. Although the GMTU and LCT2 were independently implemented and tested, the design collaboration of KSC and WFF engineers allowed GMTU and LCT2 to be integrated into one enclosure, leading to the final STACS. In operation, GMTU needs only a radio frequency (RF) input from a GPS antenna and outputs position and velocity data to the vehicle through a serial or pulse code modulation (PCM) interface. GMTU includes one commercial GPS receiver board and a custom board, the Command and Telemetry Processor (CTP) developed by KSC. The CTP design is based on a field-programmable gate array (FPGA) with embedded processors to support GPS functions.
A Multiple-Track Nursing Sequence: Supplement to Research Report No. 1.
ERIC Educational Resources Information Center
Gilpatrick, Eleanor
Following a survey of 2,361 practical nurses in New York City municipal hospitals in 1968, a specific multiple-track nursing sequence was developed to meet manpower shortages and upgrade licensed practical nurses (LPN's) to registered nurses (RN's) and nurse's aides (NA's) to LPN's. The two models designed were for use in New York City but it is…
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing throughput under a response time constraint can be found in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.
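For the special case of a linear (chain) task graph, the response-time dynamic program can be sketched as follows. This is an illustrative reconstruction under that assumption, not the paper's algorithm verbatim, and the response-time table is invented.

```python
def optimal_assignment(resp, p):
    """Response-time DP for a linear pipeline: resp[i][k-1] is the measured
    response time of task i on k processors; returns the minimal total
    response time and the per-task processor counts.  The triple loop is
    O(n * p^2), matching the chain special case discussed above."""
    n = len(resp)
    INF = float("inf")
    best = [[INF] * (p + 1) for _ in range(n + 1)]
    choice = [[0] * (p + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(1, n + 1):
        for used in range(i, p + 1):              # every task needs >= 1
            for k in range(1, used - (i - 1) + 1):
                cand = best[i - 1][used - k] + resp[i - 1][k - 1]
                if cand < best[i][used]:
                    best[i][used], choice[i][used] = cand, k
    u = min(range(n, p + 1), key=lambda v: best[n][v])
    total, alloc = best[n][u], []
    for i in range(n, 0, -1):                     # recover the choices
        alloc.append(choice[i][u])
        u -= choice[i][u]
    return total, alloc[::-1]

# Invented timing table: task 0 scales well with processors, task 1 barely.
resp = [[8.0, 4.0, 2.7, 2.0],
        [3.0, 2.9, 2.8, 2.8]]
print(optimal_assignment(resp, 4))
```

The DP correctly gives most processors to the task that benefits from them; the throughput-constrained variant would add an outer search over feasible throughputs, which is where the extra log p factor comes from.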
Software Implemented Fault-Tolerant (SIFT) user's guide
NASA Technical Reports Server (NTRS)
Green, D. F., Jr.; Palumbo, D. L.; Baltrus, D. W.
1984-01-01
Program development for a Software Implemented Fault Tolerant (SIFT) computer system is accomplished in the NASA LaRC AIRLAB facility using a DEC VAX-11 to interface with eight Bendix BDX 930 flight control processors. The interface software which provides this SIFT program development capability was developed by AIRLAB personnel. This technical memorandum describes the application and design of this software in detail, and is intended to assist both the user in performance of SIFT research and the systems programmer responsible for maintaining and/or upgrading the SIFT programming environment.
Processor design optimization methodology for synthetic vision systems
NASA Astrophysics Data System (ADS)
Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.
1997-06-01
Architecture optimization requires numerous inputs from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance and method of upgrade considerably increases the development cost due to the infinitude of events, most of which cannot even be defined by any simple enumeration or set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically passive millimeter wave system implementation.
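A genetic-algorithm search of an architecture space can be sketched as below. The parameter space and cost model here are entirely invented for illustration; the actual tool's encoding, objectives, and GA operators are not specified in the abstract.

```python
import random

# Hypothetical architecture space: core count, cache size, bus width.
CORES = [1, 2, 4, 8]
CACHE = [16, 32, 64, 128]   # KB
BUS = [16, 32, 64]          # bits

def cost(arch):
    """Invented cost model balancing hardware price against speed."""
    c, k, b = arch
    perf = CORES[c] * BUS[b] * (1 + CACHE[k] / 128)   # notional throughput
    price = 10 * CORES[c] + 0.5 * CACHE[k] + 0.2 * BUS[b]
    return price + 5000.0 / perf

def evolve(generations=40, pop_size=20, seed=1):
    rng = random.Random(seed)
    rand_arch = lambda: (rng.randrange(4), rng.randrange(4), rng.randrange(3))
    pop = [rand_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = tuple(rng.choice(g) for g in zip(a, b))  # uniform crossover
            if rng.random() < 0.3:                 # point mutation
                i = rng.randrange(3)
                lim = (4, 4, 3)[i]
                child = child[:i] + (rng.randrange(lim),) + child[i + 1:]
            children.append(child)
        pop = survivors + children                 # elitism keeps the best
    return min(pop, key=cost)

best = evolve()
print(best, round(cost(best), 2))
```

Because the best individual always survives, the returned cost never exceeds the best of the initial random population; the GA's value is exploring the combinatorial space without enumerating it, which is exactly the "infinitude of events" problem the paragraph above describes.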
A Queue Simulation Tool for a High Performance Scientific Computing Center
NASA Technical Reports Server (NTRS)
Spear, Carrie; McGalliard, James
2007-01-01
The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
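A discrete event queue simulation in miniature: the sketch below is hypothetical (the NCCS tool's internals are not described), showing FCFS scheduling of parallel jobs onto a fixed CPU pool, the core mechanic such a simulator models when comparing queue structures and allocation policies.

```python
import heapq

def simulate(jobs, total_cpus):
    """Discrete-event FCFS batch queue.  jobs = [(arrival, cpus, runtime), ...]
    sorted by arrival, with cpus <= total_cpus; returns per-job
    (start, finish) times."""
    free = total_cpus
    running = []                 # min-heap of (finish_time, cpus)
    waiting = []                 # FCFS queue of job indices
    out = [None] * len(jobs)
    i = 0
    while i < len(jobs) or waiting or running:
        # Advance to the next event: an arrival or a completion.
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        next_finish = running[0][0] if running else float("inf")
        if next_arrival <= next_finish:
            now = next_arrival
            waiting.append(i)
            i += 1
        else:
            now, cpus = heapq.heappop(running)
            free += cpus
        # Strict FCFS: start queued jobs only while the head job fits.
        while waiting and jobs[waiting[0]][1] <= free:
            j = waiting.pop(0)
            _, cpus, run = jobs[j]
            free -= cpus
            out[j] = (now, now + run)
            heapq.heappush(running, (now + run, cpus))
    return out

# Three jobs on a 4-CPU pool: job 2 must wait for job 0 to finish.
jobs = [(0.0, 3, 10.0), (1.0, 1, 4.0), (2.0, 2, 5.0)]
print(simulate(jobs, 4))
```

Swapping the strict-FCFS inner loop for a backfilling rule, or partitioning `total_cpus` into per-queue pools, is how such a tool would compare the alternative policies mentioned above.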
Electro-optic tracking R&D for defense surveillance
NASA Astrophysics Data System (ADS)
Sutherland, Stuart; Woodruff, Chris J.
1995-09-01
Two aspects of work on automatic target detection and tracking for electro-optic (EO) surveillance are described. Firstly, a detection and tracking algorithm test-bed developed by DSTO and running on a PC under Windows NT is being used to assess candidate algorithms for unresolved and minimally resolved target detection. The structure of this test-bed is described and examples are given of its user interfaces and outputs. Secondly, a development by Australian industry under a Defence-funded contract, of a reconfigurable generic track processor (GTP) is outlined. The GTP will include reconfigurable image processing stages and target tracking algorithms. It will be used to demonstrate to the Australian Defence Force automatic detection and tracking capabilities, and to serve as a hardware base for real time algorithm refinement.
NASA Technical Reports Server (NTRS)
Tarshish, Adina; Salmon, Ellen
1994-01-01
In October 1992, the NASA Center for Computational Sciences made its Convex-based UniTree system generally available to users. The ensuing months saw growth in every area. Within 26 months, data under UniTree control grew from nil to over 12 terabytes, nearly all of it stored on robotically mounted tape. HiPPI/UltraNet was added to enhance connectivity, and later HiPPI/TCP was added as well. Disks and robotic tape silos were added to those already under UniTree's control, and 18-track tapes were upgraded to 36-track. The primary data source for UniTree, the facility's Cray Y-MP/4-128, first doubled its processing power and then was replaced altogether by a C98/6-256 with nearly two-and-a-half times the Y-MP's combined peak gigaflops. The Convex/UniTree software was upgraded from version 1.5 to 1.7.5, and then to 1.7.6. Finally, the server itself, a Convex C3240, was upgraded to a C3830 with a second I/O bay, doubling the C3240's memory and capacity for I/O. This paper describes insights gained and reinforced with the burgeoning demands on the UniTree storage system and the significant increases in performance gained from the many upgrades.
Tracking and Motion Analysis of Crack Propagations in Crystals for Molecular Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsap, L V; Duchaineau, M; Goldgof, D B
2001-05-14
This paper presents a quantitative analysis for a discovery in molecular dynamics. Recent simulations have shown that velocities of crack propagations in crystals under certain conditions can become supersonic, which is contrary to classical physics. In this research, we present a framework for tracking and motion analysis of crack propagations in crystals. It includes line segment extraction based on Canny edge maps, feature selection based on physical properties, and subsequent tracking of primary and secondary wavefronts. This tracking is completely automated; it runs in real time on three 834-image sequences using forty 250 MHz processors. Results supporting physical observations are presented in terms of both feature tracking and velocity analysis.
A Parallel Algorithm for Contact in a Finite Element Hydrocode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Timothy G.
A parallel algorithm is developed for contact/impact of multiple three dimensional bodies undergoing large deformation. As time progresses the relative positions of contact between the multiple bodies change as collision and sliding occur. The parallel algorithm is capable of tracking these changes and enforcing an impenetrability constraint and momentum transfer across the surfaces in contact. Portions of the various surfaces of the bodies are assigned to the processors of a distributed-memory parallel machine in an arbitrary fashion, known as the primary decomposition. A secondary, dynamic decomposition is utilized to bring opposing sections of the contacting surfaces together on the same processors, so that opposing forces may be balanced and the resultant deformation of the bodies calculated. The secondary decomposition is accomplished and updated using only local communication with a limited subset of neighbor processors. Each processor represents both a domain of the primary decomposition and a domain of the secondary, or contact, decomposition. Thus each processor has four sets of neighbor processors: (a) those processors which represent regions adjacent to it in the primary decomposition, (b) those processors which represent regions adjacent to it in the contact decomposition, (c) those processors which send it the data from which it constructs its contact domain, and (d) those processors to which it sends its primary domain data, from which they construct their contact domains. The latter three of these neighbor sets change dynamically as the simulation progresses. By constraining all communication to these sets of neighbors, all global communication, with its attendant nonscalable performance, is avoided. A set of tests is provided to measure the degree of scalability achieved by this algorithm on up to 1024 processors. Issues related to the operating system of the test platform which lead to some degradation of the results are analyzed.
This algorithm has been implemented as the contact capability of the ALE3D multiphysics code, and is currently in production use.
The Telecommunications and Data Acquisition Report
NASA Technical Reports Server (NTRS)
Posner, E. C. (Editor)
1987-01-01
Topics addressed include: tracking and ground-based navigation; communications, spacecraft-ground; station control and system technology; capabilities for existing projects; network upgrade and sustaining; mission interface and support; and Ka-band capabilities.
Development of Shanghai satellite laser ranging station
NASA Technical Reports Server (NTRS)
Yang, Fu-Min; Tan, De-Tong; Xiao, Chi-Kun; Chen, Wan-Zhen; Zhang, J.-H.; Zhang, Z.-P.; Lu, Wen-Hu; Hu, Z.-Q.; Tang, W.-F.; Chen, J.-P.
1993-01-01
The topics covered include the following: improvement of the system hardware; upgrading of the software; the observation status; preliminary daylight tracking capability; testing the new type of laser; and future plans.
Status report on the USGS component of the Global Seismographic Network
NASA Astrophysics Data System (ADS)
Gee, L. S.; Bolton, H. F.; Derr, J.; Ford, D.; Gyure, G.; Hutt, C. R.; Ringler, A.; Storm, T.; Wilson, D.
2010-12-01
As recently as four years ago, the average age of a datalogger in the portion of the Global Seismographic Network (GSN) operated by the United States Geological Survey (USGS) was 16 years - an eternity in the lifetime of computers. The selection of the Q330HR in 2006 as the “next generation” datalogger by an Incorporated Research Institutions for Seismology (IRIS) selection committee opened the door for upgrading the GSN. As part of the “next generation” upgrades, the USGS is replacing a single Q680 system with two Q330HRs and a field processor to provide the same capability. The functionality includes digitizing, timing, event detection, conversion into miniSEED records, archival of miniSEED data on the ASP and telemetry of the miniSEED data using International Deployment of Accelerometers (IDA) Authenticated Disk Protocol (IACP). At many sites, Quanterra Balers are also being deployed. The Q330HRs feature very low power consumption (which will increase reliability) and higher resolution than the Q680 systems. Furthermore, this network-wide upgrade provides the opportunity to correct known station problems, standardize the installation of secondary sensors and accelerometers, replace the feedback electronics of STS-1 sensors, and perform checks of absolute system sensitivity and sensor orientation. The USGS upgrades began with ANMO in May, 2008. Although we deployed Q330s at KNTN and WAKE in the fall of 2007 (and in the installation of the Caribbean network), these deployments did not include the final software configuration for the GSN upgrades. Following this start, the USGS installed six additional sites in FY08. With funding from the American Recovery and Reinvestment Act and the USGS GSN program, 14 stations were upgraded in FY09. Twenty-one stations are expected to be upgraded in FY10. 
These systematic network-wide upgrades will improve the reliability and data quality of the GSN, with the end goal of providing the Earth science community high quality seismic data with global coverage. The Global Seismographic Network is operated as a partnership among the National Science Foundation, IRIS, IDA, and the USGS.
Advanced electronics for the CTF MEG system.
McCubbin, J; Vrba, J; Spear, P; McKenzie, D; Willis, R; Loewen, R; Robinson, S E; Fife, A A
2004-11-30
Development of the CTF MEG system has been advanced with the introduction of a computer processing cluster between the data acquisition electronics and the host computer. The advent of fast processors, memory, and network interfaces has made this innovation feasible for large data streams at high sampling rates. We have implemented tasks including anti-alias filter, sample rate decimation, higher gradient balancing, crosstalk correction, and optional filters with a cluster consisting of 4 dual Intel Xeon processors operating on up to 275 channel MEG systems at 12 kHz sample rate. The architecture is expandable with additional processors to implement advanced processing tasks which may include e.g., continuous head localization/motion correction, optional display filters, coherence calculations, or real time synthetic channels (via beamformer). We also describe an electronics configuration upgrade to provide operator console access to the peripheral interface features such as analog signal and trigger I/O. This allows remote location of the acoustically noisy electronics cabinet and fitting of the cabinet with doors for improved EMI shielding. Finally, we present the latest performance results available for the CTF 275 channel MEG system including an unshielded SEF (median nerve electrical stimulation) measurement enhanced by application of an adaptive beamformer technique (SAM) which allows recognition of the nominal 20-ms response in the unaveraged signal.
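The anti-alias filter plus sample-rate decimation stage mentioned above can be sketched numerically. This is a hedged NumPy illustration, not the CTF design: the filter length, cutoff, sample rate, and test tones are assumptions chosen only to show why the filter must precede the decimation.

```python
import numpy as np

def lowpass_fir(numtaps, cutoff):
    """Windowed-sinc low-pass FIR; cutoff as a fraction of Nyquist."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(cutoff * n) * np.hamming(numtaps)
    return h / h.sum()                        # unity DC gain

def decimate(x, factor, numtaps=101):
    """Anti-alias filter, then keep every `factor`-th sample."""
    h = lowpass_fir(numtaps, 0.8 / factor)    # cutoff safely below new Nyquist
    y = np.convolve(x, h, mode="same")
    return y[::factor]

# Illustrative rates: a 100 Hz tone sampled at 12 kHz survives 4x
# decimation; a 2.5 kHz tone above the new Nyquist is suppressed rather
# than folding down as an alias.
fs = 12000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 2500 * t)
y = decimate(x, 4)                            # new rate 3 kHz, Nyquist 1.5 kHz
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=4 / fs)
print(freqs[np.argmax(spec)])                 # dominant component: the 100 Hz tone
```

Without the filter, the 2.5 kHz tone would fold to 500 Hz at full amplitude; with it, the decimated stream carries only the in-band signal, which is the point of running such tasks on the cluster before data reach the host.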
Performance Models for Split-execution Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
A network control concept for the 30/20 GHz communication system baseband processor
NASA Technical Reports Server (NTRS)
Sabourin, D. J.; Hay, R. E.
1982-01-01
The architecture and system design for a satellite-switched TDMA communication system employing on-board processing was developed by Motorola for NASA's Lewis Research Center. The system design is based on distributed processing techniques that provide extreme flexibility in the selection of a network control protocol without impacting the satellite or ground terminal hardware. A network control concept that includes system synchronization and allows burst synchronization to occur within the system operational requirement is described. This concept integrates the tracking and control links with the communication links via the baseband processor, resulting in an autonomous system operational approach.
The Telecommunications and Data Acquisition Report
NASA Technical Reports Server (NTRS)
Posner, E. C. (Editor)
1983-01-01
Tracking and ground-based navigation techniques are discussed in relation to DSN advanced systems. Network data processing and productivity are studied to improve management planning methods. Project activities for upgrading DSN facilities are presented.
A 4.8 kbps code-excited linear predictive coder
NASA Technical Reports Server (NTRS)
Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.
1988-01-01
The STU-3 secure voice system, capable of providing end-to-end secure voice communications, was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. Although the performance of the present STU-3 processor is considered to be good, its response to nonspeech sounds such as whistles, coughs and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems with the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented with a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. The problems of designing a code-excited linear predictive (CELP) coder to provide very high quality speech at a 4.8 kbps data rate that can be implemented on today's hardware are considered.
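The short-term linear-prediction analysis shared by LPC-10 and CELP can be sketched with the Levinson-Durbin recursion over the frame autocorrelation. A hedged Python illustration (frame length, model order, and the synthetic test signal are assumptions, not the STU-3 parameters):

```python
import numpy as np

def lpc(frame, order):
    """Levinson-Durbin recursion on the frame autocorrelation; returns
    predictor coefficients a[1..order] (x[n] ~ sum_k a[k] x[n-k]) and the
    residual energy.  This is the short-term predictor at the core of
    LPC-10 and CELP coders."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        k = (r[i + 1] - a[:i] @ r[i:0:-1]) / err   # reflection coefficient
        a[:i] = a[:i] - k * a[:i][::-1]            # update lower-order taps
        a[i] = k
        err *= (1 - k * k)                         # shrink residual energy
    return a, err

# Synthetic AR(2) test signal: x[n] = 0.9 x[n-1] - 0.5 x[n-2] + noise,
# so the recovered coefficients should be close to [0.9, -0.5].
rng = np.random.default_rng(0)
x = np.zeros(4000)
e = rng.standard_normal(4000)
for n in range(2, 4000):
    x[n] = 0.9 * x[n - 1] - 0.5 * x[n - 2] + e[n]
a, err = lpc(x, 2)
print(np.round(a, 2))
```

A CELP coder then searches a codebook for the excitation that, passed through this predictor, best matches the input frame; the analysis step itself is the same.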
Extending the International Space Station Life and Operability
NASA Technical Reports Server (NTRS)
Cecil, Andrew J.; Pitts, R. Lee; Sparks, Ray N.; Wickline, Thomas W.; Zoller, David A.
2012-01-01
The International Space Station (ISS) is in an operational configuration with final assembly complete. To fully utilize ISS and extend the operational life, it became necessary to upgrade and extend the onboard systems with the Obsolescence Driven Avionics Redesign (ODAR) project. ODAR enabled a joint project between the Johnson Space Center (JSC) and Marshall Space Flight Center (MSFC) focused on upgrading the onboard payload and Ku-Band systems, expanding the voice and video capabilities, and including more modern protocols allowing unprecedented access for payload investigators to their on-orbit payloads. The MSFC Huntsville Operations Support Center (HOSC) was tasked with developing a high-rate enhanced Functionally Distributed Processor (eFDP) to handle 300 Mbps Return Link data, double the legacy rate, and incorporate a Line Outage Recorder (LOR). The eFDP also provides a 25 Mbps uplink transmission rate with a Space Link Extension (SLE) interface. HOSC also updated the Payload Data Services System (PDSS) to incorporate the latest Consultative Committee for Space Data Systems (CCSDS) protocols, most notably the use of Internet Protocol (IP) Encapsulation, in addition to the legacy capabilities. The Central Command Processor was also updated to interact with the new onboard and ground capabilities of Mission Control Center -- Houston (MCC-H) for the uplink functionality. The architecture, implementation, and lessons learned, including integration and incorporation of Commercial Off The Shelf (COTS) hardware and software into the operational mission of the ISS, are described herein. The applicability of this new technology provides new benefits to ISS payload users and ensures better utilization of the ISS by the science community.
Upgrade of the ATLAS Tile Calorimeter Electronics
NASA Astrophysics Data System (ADS)
Moreno, Pablo; ATLAS Tile Calorimeter System
2016-04-01
The Tile Calorimeter (TileCal) is the hadronic calorimeter covering the central region of the ATLAS experiment at LHC. The TileCal readout consists of 9852 channels. The bulk of its upgrade will occur for the High Luminosity LHC phase (Phase II), where the peak luminosity will increase 5× compared to the design luminosity (10^34 cm^-2 s^-1) at a center of mass energy of 14 TeV. The TileCal upgrade aims at replacing the majority of the on- and off-detector electronics to the extent that all calorimeter signals will be digitized and sent to the off-detector electronics in the counting room. To achieve the required reliability, redundancy has been introduced at different levels. Three different options are presently being investigated for the front-end electronics upgrade. Extensive test beam studies will determine which option will be selected. 10.24 Gbps optical links are used to read out all digitized data to the counting room while 4.8 Gbps down-links are used for synchronization, configuration and detector control. For the off-detector electronics a pre-processor (sROD) is being developed, which takes care of the initial trigger processing while temporarily storing the main data flow in pipeline and de-randomizer memories. Field Programmable Gate Arrays are extensively used for the logic functions off- and on-detector. One demonstrator prototype module with the new calorimeter module electronics, but still compatible with the present system, is planned to be inserted in ATLAS at the end of 2015.
Characterization of the Outer Barrel modules for the upgrade of the ALICE Inner Tracking System
NASA Astrophysics Data System (ADS)
Di Ruzza, B.
2017-09-01
ALICE is one of the four large detectors at the CERN LHC collider, designed to address the physics of strongly interacting matter, and in particular the properties of the Quark-Gluon Plasma, using proton-proton, proton-nucleus, and nucleus-nucleus collisions. Despite the success already achieved toward these physics goals, several measurements remain to be finalized, such as high-precision measurements of rare probes (D meson, Lambda baryon and B meson decays) over a broad range of transverse momenta. In order to achieve these new physics goals, a wide upgrade plan was approved that, combined with a significant increase of luminosity, will greatly enhance the ALICE physics capabilities and allow these fundamental measurements. The Inner Tracking System (ITS) upgrade of the ALICE detector is one of the major improvements of the experimental set-up that will take place in 2019-2020, when the whole ITS sub-detector will be replaced with one realized using an innovative monolithic active pixel silicon sensor, called ALPIDE. The upgraded ITS will be built from more than twenty-four thousand ALPIDE chips organized in seven cylindrical layers, for a total surface of about ten square meters. The main features of the new ITS are a low material budget, high granularity and low power consumption. All these capabilities will allow for full reconstruction of rare heavy-flavour decays and the achievement of the physics goals. In this paper, after an overview of the whole ITS upgrade project, the construction procedure of the basic building block of the detector, namely the module, and its characterization in the laboratory will be presented.
Flexible trigger menu implementation on the Global Trigger for the CMS Level-1 trigger upgrade
NASA Astrophysics Data System (ADS)
Matsushita, Takashi
2017-10-01
The CMS experiment at the Large Hadron Collider (LHC) has continued to explore physics at the high-energy frontier in 2016. The integrated luminosity delivered by the LHC in 2016 was 41 fb^-1, with a peak luminosity of 1.5 × 10^34 cm^-2 s^-1 and a peak mean pile-up of about 50, all exceeding the initial estimations for 2016. The CMS experiment has upgraded its hardware-based Level-1 trigger system to maintain its performance for new physics searches and precision measurements at high luminosities. The Global Trigger is the final step of the CMS Level-1 trigger and implements a trigger menu, a set of selection requirements applied to the final list of objects from calorimeter and muon triggers, for reducing the 40 MHz collision rate to 100 kHz. The Global Trigger has been upgraded with state-of-the-art FPGA processors on Advanced Mezzanine Cards with optical links running at 10 Gb/s in a MicroTCA crate. The powerful processing resources of the upgraded system enable implementation of more algorithms at a time than previously possible, allowing CMS to be more flexible in how it handles the available trigger bandwidth. Algorithms for a trigger menu, including topological requirements on multiple objects, can be realised in the Global Trigger using the newly developed trigger menu specification grammar. Analysis-like trigger algorithms can be represented in an intuitive manner, and the algorithms are translated to corresponding VHDL code blocks to build the firmware. The grammar can be extended in the future as needs arise. The experience of implementing trigger menus on the upgraded Global Trigger system will be presented.
Monolithic active pixel sensor development for the upgrade of the ALICE inner tracking system
NASA Astrophysics Data System (ADS)
Aglieri, G.; Cavicchioli, C.; Chalmet, P. L.; Chanlek, N.; Collu, A.; Giubilato, P.; Hillemanns, H.; Junique, A.; Keil, M.; Kim, D.; Kim, J.; Kugathasan, T.; Lattuca, A.; Mager, M.; Marin Tobon, C. A.; Marras, D.; Martinengo, P.; Mattiazzo, S.; Mazza, G.; Mugnier, H.; Musa, L.; Pantano, D.; Puggioni, C.; Rousset, J.; Reidt, F.; Riedler, P.; Siddhanta, S.; Snoeys, W.; Usai, G.; van Hoorne, J. W.; Yang, P.; Yi, J.
2013-12-01
ALICE plans an upgrade of its Inner Tracking System for 2018. The development of a monolithic active pixel sensor for this upgrade is described. The TowerJazz 180 nm CMOS imaging sensor process has been chosen as it offers a deep p-well, making full CMOS possible within the pixel, and allows the use of different starting materials. The ALPIDE development is an alternative to approaches based on a rolling shutter architecture, and aims to reduce power consumption and integration time by an order of magnitude below the ALICE specifications, which would be quite beneficial in terms of material budget and background. The approach is based on an in-pixel binary front-end combined with a hit-driven architecture. Several prototypes have already been designed, submitted for fabrication, and some of them tested with X-ray sources and particles in a beam. Analog power consumption has been limited by optimizing the Q/C of the sensor using Explorer chips. Promising but preliminary first results have also been obtained with a prototype ALPIDE. Radiation tolerance up to the ALICE requirements has also been verified.
Project Assessment Skills Web Application
NASA Technical Reports Server (NTRS)
Goff, Samuel J.
2013-01-01
The purpose of this project is to use Ruby on Rails to create a web application that will replace a spreadsheet used for keeping track of training courses and tasks. The goal is to create a fast and easy-to-use web application that will allow users to track progress on training courses and to update and keep track of all of the training required of them. The training courses will be organized by group and by user, improving readability, and allowing group leads and administrators to get a sense of how everyone is progressing in training. Currently, updating and finding information in this spreadsheet is a long and tedious task. By upgrading to a web application, finding and updating information, as well as adding new training courses and tasks, will be easier than ever. Accessing this data will also be much easier, in that users just have to go to a website and log in with NDC credentials rather than request the relevant spreadsheet from its holder. In addition to Ruby on Rails, I will be using JavaScript, CSS, and jQuery to help add functionality and ease of use to my web application. This web application will include a number of features that will help update and track progress on training. For example, one feature will track the progress of a whole group of users, to show how the group as a whole is progressing. Another feature will assign tasks to either a user or a group of users. Together these will create a user-friendly and functional web application.
Control Software for Advanced Video Guidance Sensor
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.
2006-01-01
Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
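The command-checking and mode-transition flow described above, where the first processor validates commands and the software moves from self-test to standby to operational modes, can be sketched as a small state machine. A hedged illustration in Python: the mode names and command set here are assumptions for demonstration, not the actual AVGS command dictionary.

```python
# Hypothetical command set for illustration; the real AVGS commands differ.
VALID = {"ACQUIRE", "TRACK", "STANDBY"}

def avgs_step(mode, command):
    """Validate a command and return the next mode, rejecting unknown or
    out-of-sequence commands (as the first-processor subprogram does)."""
    if command not in VALID:
        return mode                      # reject malformed command
    if mode == "SELF_TEST":
        return mode                      # ignore commands until tests complete
    if command == "STANDBY":
        return "STANDBY"                 # standby reachable from any mode
    if mode == "STANDBY" and command == "ACQUIRE":
        return "ACQUISITION"
    if mode == "ACQUISITION" and command == "TRACK":
        return "TRACKING"
    return mode                          # out-of-sequence: ignore
```

The same pattern extends naturally with per-command parameter checks before the transition is taken.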
NASA Astrophysics Data System (ADS)
Stark, Giordon; Atlas Collaboration
2015-04-01
The Global Feature Extraction (gFEX) module is a Level 1 jet trigger system planned for installation in ATLAS during the Phase 1 upgrade in 2018. The gFEX selects large-radius jets for capturing Lorentz-boosted objects by means of wide-area jet algorithms refined by subjet information. The architecture of the gFEX permits event-by-event local pile-up suppression for these jets using the same subtraction techniques developed for offline analyses. The gFEX architecture is also suitable for other global event algorithms such as missing transverse energy (MET), centrality for heavy ion collisions, and ``jets without jets.'' The gFEX will use 4 processor FPGAs to perform calculations on the incoming data and a Hybrid APU-FPGA for slow control of the module. The gFEX is unique in both design and implementation; it substantially enhances the selectivity of the L1 trigger and increases sensitivity to key physics channels.
2-D Acousto-Optic Signal Processors for Simultaneous Spectrum Analysis and Direction Finding
1990-11-01
NASA Astrophysics Data System (ADS)
Selker, Ted
1983-05-01
Lens focusing using a hardware model of a retina (a Reticon RL256 light-sensitive array) with a low-cost processor (an 8085 with 512 bytes of ROM and 512 bytes of RAM) was built. This system was developed and tested on a variety of visual stimuli to demonstrate that: (a) an algorithm which moves a lens to maximize the sum of the differences of light level on adjacent light sensors will converge to best focus in all but contrived situations; this is a simpler algorithm than any previously suggested; (b) it is feasible to use unmodified video sensor arrays with inexpensive processors to aid video camera use. In the future, software could be developed to extend the processor's usefulness, possibly to track an actor by panning and zooming to give a camera operator increased ease of framing; (c) lateral inhibition is an adequate basis for determining best focus. This supports a simple anatomically motivated model of how our brain focuses our eyes.
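Algorithm (a) above, maximizing the sum of adjacent-sensor light-level differences, can be sketched in a few lines. This is a hedged illustration: the synthetic sensor model in the usage example and the exhaustive position sweep (rather than the incremental lens motion the original system used) are assumptions for demonstration.

```python
def focus_metric(pixels):
    """Sum of absolute differences between adjacent sensor elements:
    the lateral-inhibition-style sharpness measure described above."""
    return sum(abs(b - a) for a, b in zip(pixels, pixels[1:]))

def autofocus(read_sensor, positions):
    """Sweep lens positions and keep the one whose image maximizes
    the adjacent-difference metric."""
    return max(positions, key=lambda pos: focus_metric(read_sensor(pos)))
```

A defocused image spreads light across neighboring sensors, flattening the adjacent differences, so the metric peaks at best focus for any scene with some contrast.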
NASA Astrophysics Data System (ADS)
Narayan, Ramesh; Zhu, Yucong; Psaltis, Dimitrios; Sądowski, Aleksander
2016-03-01
We describe Hybrid Evaluator for Radiative Objects Including Comptonization (HEROIC), an upgraded version of the relativistic radiative post-processor code HERO described in a previous paper, but which now Includes Comptonization. HEROIC models Comptonization via the Kompaneets equation, using a quadratic approximation for the source function in a short characteristics radiation solver. It employs a simple form of accelerated lambda iteration to handle regions of high scattering opacity. In addition to solving for the radiation field, HEROIC also solves for the gas temperature by applying the condition of radiative equilibrium. We present benchmarks and tests of the Comptonization module in HEROIC with simple 1D and 3D scattering problems. We also test the ability of the code to handle various relativistic effects using model atmospheres and accretion flows in a black hole space-time. We present two applications of HEROIC to general relativistic magnetohydrodynamics simulations of accretion discs. One application is to a thin accretion disc around a black hole. We find that the gas below the photosphere in the multidimensional HEROIC solution is nearly isothermal, quite different from previous solutions based on 1D plane parallel atmospheres. The second application is to a geometrically thick radiation-dominated accretion disc accreting at 11 times the Eddington rate. Here, the multidimensional HEROIC solution shows that, for observers who are on axis and look down the polar funnel, the isotropic equivalent luminosity could be more than 10 times the Eddington limit, even though the spectrum might still look thermal and show no signs of relativistic beaming.
Single-chip microcomputer for image processing in the photonic measuring system
NASA Astrophysics Data System (ADS)
Smoleva, Olga S.; Ljul, Natalia Y.
2002-04-01
The non-contact measuring system has been designed for rail-track parameter control on the Moscow Metro. It detects several significant parameters: rail-track width, rail-track height, gage, rail-slums, crosslevel, pickets, and car speed. The system consists of three subsystems: a non-contact system for rail-track width, height, and gage inspection; a non-contact system for rail-slums inspection; and a subsystem for crosslevel, speed, and picket detection. Data from the subsystems is transferred to a pre-processing unit. To process the data received from the subsystems, the single-chip signal processor ADSP-2185 is used, as it provides the required processing speed. The processed data is then sent to a PC, which processes it further and outputs it in readable form.
Module and electronics developments for the ATLAS ITk pixel system
NASA Astrophysics Data System (ADS)
Muñoz, F. J.
2018-03-01
The ATLAS experiment is preparing for an extensive modification of its detectors in the course of the planned HL-LHC accelerator upgrade around 2025. The ATLAS upgrade includes the replacement of the entire tracking system by an all-silicon detector (Inner Tracker, ITk). The five innermost layers of ITk will be a pixel detector built with new sensor and readout electronics technologies to improve the tracking performance and cope with the severe HL-LHC environment in terms of occupancy and radiation. The total area of the new pixel system could measure up to 14 m^2, depending on the final layout choice, which is expected to be made in 2018. In this paper an overview of the ongoing R&D activities on modules and electronics for the ATLAS ITk is given, including the main developments and achievements in silicon planar and 3D sensor technologies, readout and power challenges.
Phase-locked tracking loops for LORAN-C
NASA Technical Reports Server (NTRS)
Burhans, R. W.
1978-01-01
Portable battery operated LORAN-C receivers were fabricated to evaluate simple envelope detector methods with hybrid analog to digital phase locked loop sensor processors. The receivers are used to evaluate LORAN-C in general aviation applications. Complete circuit details are given for the experimental sensor and readout system.
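The digital half of such a hybrid analog-to-digital phase-locked loop reduces to a proportional-integral loop filter driving a phase estimate until the detected phase error is cancelled. A hedged sketch: the `detector` abstraction and the gain values are illustrative, not taken from the receiver's actual circuit, and the carrier mixing that produces the error signal is abstracted away.

```python
def pll_lock(detector, steps, kp=0.2, ki=0.05):
    """Drive a phase estimate with a proportional-integral loop filter
    until it cancels the incoming phase offset. `detector(est)` models
    the phase detector: it returns the residual error for the current
    estimate."""
    est, integ = 0.0, 0.0
    for _ in range(steps):
        err = detector(est)        # phase detector output
        integ += ki * err          # integral path (removes static error)
        est += kp * err + integ    # proportional path + oscillator update
    return est

# usage: lock onto a fixed 0.5 rad phase offset
locked = pll_lock(lambda est: 0.5 - est, 200)
```

With these gains the closed loop is stable (the error decays geometrically), so after a couple hundred iterations the estimate sits on the offset.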
Get a winning Oracle upgrade session using the quarterback approach
NASA Technical Reports Server (NTRS)
Anderson, G.
2002-01-01
Upgrades, upgrades... too much customer downtime. Find out how we shrank our production upgrade schedule 40%, from our estimate of 10 days 12 hours to 6 days 2 hours, using the quarterback approach. So your upgrade is not that complex? Come anyway. This approach is scalable to any size project and will be extremely valuable.
Parallel machine architecture for production rule systems
Allen, Jr., John D.; Butler, Philip L.
1989-01-01
A parallel processing system for production rule programs utilizes a host processor for storing production rule right-hand sides (RHS) and a plurality of rule processors for storing left-hand sides (LHS). The rule processors operate in parallel in the Recognize phase of the system's Recognize-Act cycle to match their respective LHSs against a stored list of working memory elements (WMEs) in order to find a self-consistent set of WMEs. The list of WMEs is dynamically varied during the Act phase, in which the host executes, or fires, the RHSs of those rules for which a self-consistent set has been found by the rule processors. The host transmits instructions for creating or deleting working memory elements as dictated by the rule firings until the rule processors are unable to find any further self-consistent working memory element sets, at which time the production rule system halts.
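The Recognize-Act cycle described above can be sketched sequentially. A hedged single-processor illustration in Python: the patent distributes LHS matching across parallel rule processors, and the tuple representation of working memory elements here is an assumption for demonstration.

```python
def recognize_act(rules, wm, max_cycles=100):
    """rules: list of (lhs, rhs) pairs, where lhs(wm) returns a matched
    WME (or None) and rhs(match) returns (adds, deletes) lists of WMEs.
    wm: a set of working memory elements, mutated in place."""
    for _ in range(max_cycles):
        fired = False
        for lhs, rhs in rules:
            match = lhs(wm)                  # Recognize: match LHS against WM
            if match is not None:
                adds, deletes = rhs(match)   # Act: fire the rule's RHS
                for w in deletes:
                    wm.discard(w)
                wm.update(adds)
                fired = True
                break
        if not fired:                        # no LHS matches: system halts
            break
    return wm
```

For example, a single rule that increments a counter WME fires repeatedly until its LHS no longer matches, at which point the cycle halts, mirroring the termination condition in the abstract.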
Pixel sensors with slim edges and small pitches for the CMS upgrades for HL-LHC
Vernieri, Caterina; Bolla, Gino; Rivera, Ryan; ...
2016-06-07
Here, planar n-in-n silicon detectors with small pitches and slim edges are being investigated for the innermost layers of tracking devices for the foreseen upgrades of the LHC experiments. Sensor prototypes compatible with the CMS readout, fabricated by Sintef, were tested in the laboratory and with a 120 GeV/c proton beam at the Fermilab test beam facility, before and after irradiation up to a fluence of 2 × 10^15 n_eq/cm^2. Preliminary results of the data analysis are presented.
Permanent magnet synchronous motor servo system control based on μC/OS
NASA Astrophysics Data System (ADS)
Shi, Chongyang; Chen, Kele; Chen, Xinglong
2015-10-01
When an Opto-Electronic Tracking system operates in complex environments, every subsystem must operate efficiently and stably. As an important part of the Opto-Electronic Tracking system, the performance of the PMSM (Permanent Magnet Synchronous Motor) servo system greatly affects the system's accuracy and speed [1][2]. This paper applies the embedded real-time operating system μC/OS to the control of the PMSM servo system, implements the SVPWM (Space Vector Pulse Width Modulation) algorithm in the servo system, and optimizes the servo system's stability. To address the characteristics of the Opto-Electronic Tracking system, μC/OS is extended with software redundancy processes, remote debugging and upgrading. As a result, the Opto-Electronic Tracking system performs efficiently and stably.
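The core of SVPWM is determining which sector of the hexagonal space-vector plane the reference voltage lies in and how long the two adjacent active switching vectors must dwell within each PWM period. A hedged sketch using the textbook dwell-time relations, not necessarily the paper's exact implementation; the modulation-index convention assumed here is m = √3·|V_ref|/V_dc.

```python
import math

def svpwm_duties(v_alpha, v_beta, vdc):
    """Return (sector, t1, t2, t0): the sector index 0..5 and the dwell
    times, as fractions of the switching period, of the two adjacent
    active vectors and the zero vectors."""
    mag = math.hypot(v_alpha, v_beta)             # |V_ref|
    theta = math.atan2(v_beta, v_alpha) % (2 * math.pi)
    sector = int(theta // (math.pi / 3))          # 60-degree sectors
    phi = theta - sector * math.pi / 3            # angle within the sector
    m = math.sqrt(3) * mag / vdc                  # modulation index
    t1 = m * math.sin(math.pi / 3 - phi)          # first active vector
    t2 = m * math.sin(phi)                        # second active vector
    t0 = 1.0 - t1 - t2                            # split among zero vectors
    return sector, t1, t2, t0
```

At the middle of a sector (phi = 30°) the two active vectors share the period equally, which is a quick sanity check on an implementation.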
Entwicklungsarbeit am Spurendetektor fur das CDF Experiment am Tevatron (in German/English)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartmann, Frank
2000-02-01
Silicon, the element which revolutionized the development of electronics, is an important and versatile material, dominating today's electronic technology. Its properties are well investigated and well known today. Silicon is used in solar cells, computers and telecommunications. Since the sixties, semiconductors have been used as particle detectors. Initially they were operated in fixed-target experiments as calorimeters and as detectors with high-precision track reconstruction. Since the eighties they have been widely used in collider experiments as silicon microstrip or silicon pixel detectors near the primary vertex. Silicon sensors have a very good intrinsic energy resolution: for every 3.6 eV released by a particle crossing the medium, one electron-hole pair is produced. Compared to the 30 eV required to ionize a gas molecule in a gaseous detector, one gets about 10 times the number of charge carriers. Due to the high density of silicon, the average energy loss is large, about 390 eV/μm, corresponding to roughly 108 electron-hole pairs per μm. These detectors allow a high-precision reconstruction of tracks and of primary and secondary vertices, which are especially important for b-flavour tagging. The Tevatron and its detectors are being upgraded for the next data-taking run starting in 2001 (Run II). The Collider Detector at Fermilab (CDF) [2] for the upcoming Run II and its upgraded components are described in chapter 2. The main upgrade project is the design and construction of a completely new inner tracking system.
75 FR 12575 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-16
... licensed pursuant to 10 CFR Part 61 or equivalent Agreement State regulations. All generators, collectors... processors, contains information which facilitates tracking the identity of the waste generator. That... generators. The information provided on NRC Form 542 permits the States and Compacts to know the original...
Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor.
Dillier, Norbert; Lai, Wai Kong
2015-06-11
The Nucleus® 5 System Sound Processor (CP810, Cochlear™, Macquarie University, NSW, Australia) contains two omnidirectional microphones. They can be configured as a fixed directional microphone combination (called Zoom) or as an adaptive beamformer (called Beam), which adjusts the directivity continuously to maximally reduce the interfering noise. Initial evaluation studies with the CP810 had compared performance and usability of the new processor in comparison with the Freedom™ Sound Processor (Cochlear™) for speech in quiet and noise for a subset of the processing options. This study compares the two processing options suggested to be used in noisy environments, Zoom and Beam, for various sound field conditions using a standardized speech in noise matrix test (Oldenburg sentences test). Nine German-speaking subjects who previously had been using the Freedom speech processor and subsequently were upgraded to the CP810 device participated in this series of additional evaluation tests. The speech reception threshold (SRT for 50% speech intelligibility in noise) was determined using sentences presented via loudspeaker at 65 dB SPL in front of the listener and noise presented either via the same loudspeaker (S0N0) or at 90 degrees at either the ear with the sound processor (S0NCI+) or the opposite unaided ear (S0NCI-). The fourth noise condition consisted of three uncorrelated noise sources placed at 90, 180 and 270 degrees. The noise level was adjusted through an adaptive procedure to yield a signal to noise ratio where 50% of the words in the sentences were correctly understood. In spatially separated speech and noise conditions both Zoom and Beam could improve the SRT significantly. For single noise sources, either ipsilateral or contralateral to the cochlear implant sound processor, average improvements with Beam of 12.9 and 7.9 dB in SRT were found.
The average SRT of -8 dB for Beam in the diffuse noise condition (uncorrelated noise from both sides and back) is truly remarkable and comparable to the performance of normal hearing listeners in the same test environment. The static directivity (Zoom) option in the diffuse noise condition still provides a significant benefit of 5.9 dB in comparison with the standard omnidirectional microphone setting. These results indicate that CI recipients may improve their speech recognition in noisy environments significantly using these directional microphone-processing options.
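The adaptive procedure described, adjusting the noise level toward the SNR of 50% intelligibility, is commonly realized as a simple up-down staircase. A hedged sketch in Python: the 1-up/1-down rule, the fixed 2 dB step, and averaging over reversals are generic illustrations, not the exact Oldenburg matrix-test procedure (which scores individual words and adapts its step size).

```python
def adaptive_srt(trial, start_snr=0.0, step=2.0, trials=20):
    """Simple 1-up/1-down staircase converging on the 50%-correct SNR.
    `trial(snr)` returns True if the listener repeated at least half of
    the words at that signal-to-noise ratio."""
    snr, reversals, last = start_snr, [], None
    for _ in range(trials):
        correct = trial(snr)
        if last is not None and correct != last:
            reversals.append(snr)        # response changed direction
        last = correct
        snr += -step if correct else step  # harder when correct, easier when not
    # SRT estimate: mean SNR at the reversal points
    return sum(reversals) / len(reversals) if reversals else snr
```

Against a deterministic listener with a sharp threshold, the track oscillates one step either side of that threshold, and the reversal average lands between the two levels.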
Common Readout Unit (CRU) - A new readout architecture for the ALICE experiment
NASA Astrophysics Data System (ADS)
Mitra, J.; Khan, S. A.; Mukherjee, S.; Paul, R.
2016-03-01
The ALICE experiment at the CERN Large Hadron Collider (LHC) is presently undergoing a major upgrade in order to fully exploit the scientific potential of the upcoming high luminosity run, scheduled to start in the year 2021. The high interaction rate and the large event size will result in an experimental data flow of about 1 TB/s from the detectors, which needs to be processed before being sent to the online computing system and data storage. This processing is done in a dedicated Common Readout Unit (CRU), proposed for data aggregation, trigger and timing distribution, and control moderation. It acts as a common interface between the sub-detector electronic systems, the computing system and the trigger processors. The interface links include GBT, TTC-PON and PCIe. GBT (Gigabit Transceiver) is used for detector data payload transmission and as a fixed-latency path for trigger distribution between the CRU and the detector readout electronics. TTC-PON (Timing, Trigger and Control via Passive Optical Network) is employed for time-multiplexed trigger distribution between the CRU and the Central Trigger Processor (CTP). PCIe (Peripheral Component Interconnect Express) is the high-speed serial computer expansion bus standard used for bulk data transport between CRU boards and processors. In this article, we give an overview of the CRU architecture in ALICE and discuss the different interfaces, along with the firmware design and implementation of the CRU on the LHCb PCIe40 board.
Three Generations of FPGA DAQ Development for the ATLAS Pixel Detector
NASA Astrophysics Data System (ADS)
Mayer, Joseph A., II
The Large Hadron Collider (LHC) at the European Center for Nuclear Research (CERN) follows a schedule of long physics runs separated by periods of inactivity known as Long Shutdowns (LS). During these LS phases both the LHC and the experiments around its ring undergo maintenance and upgrades. For the LHC, these upgrades improve its ability to deliver data to physicists; the more data the LHC can deliver, the more opportunities there are for rare events of interest to appear. The experiments upgrade in turn so that such events are recorded rather than missed. Currently the LHC is in Run 2, having completed the first of three Long Shutdowns. This thesis focuses on the development of Field-Programmable Gate Array (FPGA)-based readout systems that span three major tasks of the ATLAS Pixel data acquisition (DAQ) system. The evolution of the Pixel DAQ's Readout Driver (ROD) card is presented: starting from improvements made to the new Insertable B-Layer (IBL) ROD design, which was part of the LS1 upgrade, to upgrading the old RODs from Run 1 to help them run more efficiently in Run 2. It also includes the research and development of FPGA-based DAQs and integrated-circuit emulators for the ITk upgrade, which will occur during LS3 in 2025.
Testbeam results of irradiated ams H18 HV-CMOS pixel sensor prototypes
NASA Astrophysics Data System (ADS)
Benoit, M.; Braccini, S.; Casse, G.; Chen, H.; Chen, K.; Di Bello, F. A.; Ferrere, D.; Golling, T.; Gonzalez-Sevilla, S.; Iacobucci, G.; Kiehn, M.; Lanni, F.; Liu, H.; Meng, L.; Merlassino, C.; Miucci, A.; Muenstermann, D.; Nessi, M.; Okawa, H.; Perić, I.; Rimoldi, M.; Ristić, B.; Barrero Pinto, M. Vicente; Vossebeld, J.; Weber, M.; Weston, T.; Wu, W.; Xu, L.; Zaffaroni, E.
2018-02-01
HV-CMOS pixel sensors are a promising option for the tracker upgrade of the ATLAS experiment at the LHC, as well as for other future tracking applications in which large areas are to be instrumented with radiation-tolerant silicon pixel sensors. We present results of testbeam characterisations of the 4th generation of Capacitively Coupled Pixel Detectors (CCPDv4) produced with the ams H18 HV-CMOS process that have been irradiated with different particles (reactor neutrons and 18 MeV protons) to fluences between 1 × 10^14 and 5 × 10^15 1-MeV n_eq. The sensors were glued to ATLAS FE-I4 pixel readout chips and measured at the CERN SPS H8 beamline using the FE-I4 beam telescope. Results for all fluences are very encouraging, with all hit efficiencies better than 97% at bias voltages of 85 V. The sample irradiated to a fluence of 1 × 10^15 n_eq (a relevant value for a large volume of the upgraded tracker) exhibited 99.7% average hit efficiency. The results give strong evidence for the radiation tolerance of HV-CMOS sensors and their suitability as sensors for the experimental HL-LHC upgrades and future large-area silicon-based tracking detectors in high-radiation environments.
Readout of the upgraded ALICE-ITS
NASA Astrophysics Data System (ADS)
Szczepankiewicz, A.; ALICE Collaboration
2016-07-01
The ALICE experiment will undergo a major upgrade during the second long shutdown of the CERN LHC. As part of this program, the present Inner Tracking System (ITS), which employs different layers of hybrid pixels, silicon drift and strip detectors, will be replaced by a completely new tracker composed of seven layers of monolithic active pixel sensors. The upgraded ITS will have more than twelve billion pixels in total, producing 300 Gbit/s of data when tracking 50 kHz Pb-Pb events. Two families of pixel chips realized with the TowerJazz CMOS imaging process have been developed as candidate sensors: the ALPIDE, which uses a proprietary readout and sparsification mechanism and the MISTRAL-O, based on a proven rolling shutter architecture. Both chips can operate in continuous mode, with the ALPIDE also supporting triggered operations. As the communication IP blocks are shared among the two chip families, it has been possible to develop a common Readout Electronics. All the sensor components (analog stages, state machines, buffers, FIFOs, etc.) have been modelled in a system level simulation, which has been extensively used to optimize both the sensor and the whole readout chain design in an iterative process. This contribution covers the progress of the R&D efforts and the overall expected performance of the ALICE-ITS readout system.
Deployment of the Hobby-Eberly Telescope wide-field upgrade
NASA Astrophysics Data System (ADS)
Hill, Gary J.; Drory, Niv; Good, John M.; Lee, Hanshin; Vattiat, Brian L.; Kriel, Herman; Ramsey, Jason; Bryant, Randy; Elliot, Linda; Fowler, Jim; Häuser, Marco; Landiau, Martin; Leck, Ron; Odewahn, Stephen; Perry, Dave; Savage, Richard; Schroeder Mrozinski, Emily; Shetrone, Matthew; DePoy, D. L.; Prochaska, Travis; Marshall, J. L.; Damm, George; Gebhardt, Karl; MacQueen, Phillip J.; Martin, Jerry; Armandroff, Taft; Ramsey, Lawrence W.
2016-07-01
The Hobby-Eberly Telescope (HET) is an innovative large telescope, located in West Texas at the McDonald Observatory. The HET operates with a fixed segmented primary and has a tracker, which moves the four-mirror corrector and prime focus instrument package to track the sidereal and non-sidereal motions of objects. We have completed a major multi-year upgrade of the HET that has substantially increased the pupil size to 10 meters and the field of view to 22 arcminutes by replacing the corrector, tracker, and prime focus instrument package. The new wide field HET will feed the revolutionary integral field spectrograph called VIRUS, in support of the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX), a new low resolution spectrograph (LRS2), an upgraded high resolution spectrograph (HRS2), and later the Habitable Zone Planet Finder (HPF). The upgrade is being commissioned and this paper discusses the completion of the installation, the commissioning process and the performance of the new HET.
Electrically tunable lens speeds up 3D orbital tracking
Annibale, Paolo; Dvornikov, Alexander; Gratton, Enrico
2015-01-01
3D orbital particle tracking is a versatile and effective microscopy technique that allows following fast-moving fluorescent objects within living cells and reconstructing complex 3D shapes using laser scanning microscopes. We demonstrated notable improvements in the range, speed and accuracy of 3D orbital particle tracking by replacing the commonly used piezoelectric stages with an Electrically Tunable Lens (ETL) that eliminates mechanical movement of objective lenses. This allowed tracking and reconstructing the shape of structures extending 500 microns in the axial direction. Using the ETL, we tracked at high speed fluorescently labeled genomic loci within the nucleus of living cells with an unprecedented temporal resolution of 8 ms using a 1.42 NA oil-immersion objective. The presented technology is cost effective and allows easy upgrade of scanning microscopes for fast 3D orbital tracking. PMID:26114037
Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model
Fu, Changhong; Duan, Ran; Kircali, Dogan; Kayacan, Erdal
2016-01-01
In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomously tracking and chasing a moving target. The first main component of this algorithm is a global matching and local tracking approach: the algorithm initially finds feature correspondences using an improved binary descriptor developed for global feature matching, and employs an iterative Lucas–Kanade optical flow algorithm for local feature tracking. The second main module is an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third module is a heuristic local outlier factor (to the best of our knowledge, utilized for the first time to deal with outlier features in a visual tracking application), which further improves the representation of the target object; outlier feature detection is formulated as a binary classification problem on the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor at 640 × 512 image resolution and compares favorably with the most popular state-of-the-art trackers in terms of robustness, efficiency and accuracy. PMID:27589769
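The forward-backward consistency idea underlying this kind of outlier handling can be illustrated with a minimal sketch (hypothetical and much simpler than the paper's LGF and pairwise dissimilarity measure): points are tracked from frame t to t+1 and back to t, and correspondences whose round trip does not return near the starting location are rejected.

```python
import numpy as np

def forward_backward_inliers(pts, pts_back, thresh=2.0):
    """Flag correspondences whose forward-backward error is small.

    pts      : (N, 2) point locations in frame t
    pts_back : (N, 2) the same points after tracking t -> t+1 -> t
    A correspondence is kept if the round trip returns within
    `thresh` pixels of where it started.
    """
    fb_err = np.linalg.norm(np.asarray(pts) - np.asarray(pts_back), axis=1)
    return fb_err < thresh
```

In a real tracker the forward and backward point sets would come from two optical-flow passes; here they are simply inputs to the filter.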
KU-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Griffin, J. W.
1980-01-01
The preparation of a real-time computer simulation model of the KU-band rendezvous radar to be integrated into the shuttle mission simulator (SMS), the shuttle engineering simulator (SES), and the shuttle avionics integration laboratory (SAIL) simulator is described. To meet crew training requirements, a radar tracking performance model and a target modeling method were developed. The parent simulation/radar simulation interface requirements and the method selected to model target scattering properties, including an application of this method to the SPAS spacecraft, are described. The radar search and acquisition mode performance model and the radar track mode signal processor model are examined and analyzed. The angle, angle rate, range, and range rate tracking loops are also discussed.
Reducing Interprocessor Dependence in Recoverable Distributed Shared Memory
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. Kent
1994-01-01
Checkpointing techniques in parallel systems use dependency tracking and/or message logging to ensure that a system rolls back to a consistent state. Traditional dependency tracking in distributed shared memory (DSM) systems is expensive because of high communication frequency. In this paper we show that, if designed correctly, a DSM system only needs to consider dependencies due to the transfer of blocks of data, resulting in reduced dependency tracking overhead and reduced potential for rollback propagation. We develop an ownership timestamp scheme to tolerate the loss of block state information and develop a passive server model of execution where interactions between processors are considered atomic. With our scheme, dependencies are significantly reduced compared to the traditional message-passing model.
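The block-granularity dependency rule the abstract describes can be illustrated with a toy vector-clock merge (the names and structure here are illustrative, not the paper's actual protocol): each processor keeps a vector of checkpoint epochs it depends on, and that vector is updated only when a block of shared data is transferred, rather than on every message.

```python
def receive_block(local_deps, block_deps, sender_id, sender_epoch):
    """Merge the dependency vector carried by a transferred block into the
    receiver's vector (componentwise max), and record a dependency on the
    sender's current checkpoint epoch. Because this runs only on block
    transfers, per-message dependency-tracking overhead is avoided.
    """
    merged = [max(a, b) for a, b in zip(local_deps, block_deps)]
    merged[sender_id] = max(merged[sender_id], sender_epoch)
    return merged
```

On rollback, a processor would only need to coordinate with processors whose epochs appear in its (much smaller) dependency vector.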
NASA Astrophysics Data System (ADS)
Webster, Jordan
2017-01-01
Dense track environments in pp collisions at the Large Hadron Collider (LHC) motivate the use of triggers with dedicated hardware for fast track reconstruction. The ATLAS Collaboration is in the process of implementing a Fast Tracker (FTK) trigger upgrade, in which Content Addressable Memories (CAMs) will be used to rapidly match hit patterns with large banks of simulated tracks. The FTK CAMs are produced primarily at the University of Pisa. However, commercial CAM technology is rapidly developing due to applications in computer networking devices. This poster presents new studies comparing FTK CAMs to cutting-edge ternary CAMs developed by Cavium. The comparison is intended to guide the design of future track-based trigger systems for the next Phase at the LHC.
NASA Technical Reports Server (NTRS)
Carter, Donald Layne
2017-01-01
The ISS WRS produces potable water from crew urine, crew latent water, and Sabatier product water. This system has been operational on ISS since November 2008, producing over 30,000 L of water during that time. The WRS includes a Urine Processor Assembly (UPA) to produce a distillate from the crew urine. This distillate is combined with the crew latent water and Sabatier product water and further processed by the Water Processor Assembly (WPA) into potable water. The UPA and WPA use technologies commonly employed for water purification, including filtration, distillation, adsorption, ion exchange, and catalytic oxidation. The primary challenge with the design and operation of the WRS has been implementing these technologies in microgravity. The absence of gravity creates unique issues that impact the composition of the waste streams, alter two-phase fluid dynamics, and increase the impact of particulates on system performance. NASA personnel continue to pursue upgrades to the existing design to improve reliability while also addressing its viability for missions beyond ISS.
CryoSat Ice Processor: Known Processor Anomalies and Potential Future Product Evolutions
NASA Astrophysics Data System (ADS)
Mannan, R.; Webb, E.; Hall, A.; Bouffard, J.; Femenias, P.; Parrinello, T.; Bouffard, J.; Brockley, D.; Baker, S.; Scagliola, M.; Urien, S.
2016-08-01
Launched in 2010, CryoSat was designed to measure changes in polar sea ice thickness and ice sheet elevation. To reach this goal, the CryoSat data products have to meet the highest performance standards and are subjected to a continual cycle of improvement achieved through upgrades to the Instrument Processing Facilities (IPFs). Following the switch to the Baseline-C Ice IPFs, evolutions are already planned for the next processing Baseline, based on recommendations from the Scientific Community, Expert Support Laboratory (ESL), Quality Control (QC) Centres and Validation campaigns. Some of the proposed evolutions, to be discussed with the scientific community, include the activation of freeboard computation in SARin mode, the potential operation of SARin mode over flat-to-slope transitory land ice areas, further tuning of the land ice retracker, the switch to NetCDF format and the resolution of anomalies arising in Baseline-C. This paper describes some of the anomalies known to affect Baseline-C in addition to potential evolutions that are planned and foreseen for Baseline-D.
NASA Technical Reports Server (NTRS)
Pordes, Ruth (Editor)
1989-01-01
Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert systems tactics in testing FASTBUS segment interconnect modules, trigger control in a high energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer-based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting Linac, and a distributed system for laboratory process automation. Additional topics include a structure macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
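A common single-coefficient radial model illustrates the kind of mapping such a correction mesh encodes (a hedged sketch; the paper's polar mesh generation and actual coefficients are not reproduced here). Distorted radii satisfy r_d = r_u(1 + k1*r_u^2), and the inverse mapping can be computed by fixed-point iteration before being baked into texture coordinates:

```python
import numpy as np

def undistort_points(x_d, y_d, k1, iters=20):
    """Invert the radial model r_d = r_u * (1 + k1 * r_u**2) by fixed-point
    iteration, mapping distorted (x_d, y_d) back to undistorted coordinates.
    Coordinates are normalized, with the distortion center at the origin.
    """
    x_d, y_d = np.asarray(x_d, float), np.asarray(y_d, float)
    r_d = np.hypot(x_d, y_d)
    r_u = r_d.copy()
    for _ in range(iters):
        r_u = r_d / (1.0 + k1 * r_u**2)   # contraction for moderate k1
    scale = np.divide(r_u, r_d, out=np.ones_like(r_d), where=r_d > 0)
    return x_d * scale, y_d * scale
```

In the GPU approach described above, this inverse would be evaluated once per mesh vertex offline, so that per-frame correction reduces to texture-mapped interpolation.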
Monitoring Data-Structure Evolution in Distributed Message-Passing Programs
NASA Technical Reports Server (NTRS)
Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)
1996-01-01
Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information in parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs using PVM. With the help of a number of examples we show that, in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of virtual address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with light-weight core files, this technique can be used to complement traditional debuggers on single processors. Detailed information (such as data-structure contents) on specific nodes can be determined using traditional debuggers after the data-structure evolution leading to the semantic error is observed graphically.
Instructional image processing on a university mainframe: The Kansas system
NASA Technical Reports Server (NTRS)
Williams, T. H. L.; Siebert, J.; Gunn, C.
1981-01-01
An interactive digital image processing program package was developed that runs on the University of Kansas central computer, a Honeywell Level 66 multi-processor system. The modular form of the package allows easy and rapid upgrades and extensions of the system; it is used in remote sensing courses in the Department of Geography, in regional five-day short courses for academics and professionals, and also in remote sensing projects and research. The package comprises three self-contained modules of processing functions: subimage extraction and rectification; image enhancement, preprocessing and data reduction; and classification. Its use in a typical course setting is described. Availability and costs are considered.
An overview of autonomous rendezvous and docking system technology development at General Dynamics
NASA Technical Reports Server (NTRS)
Kuenzel, Fred
1991-01-01
The Centaur avionics suite is undergoing a dramatic modernization for the commercial, DoD Atlas, and Titan programs. The system has been upgraded to the current state of the art in ring laser gyro inertial sensors and Mil-Std-1750A processor technology. The Cruise Missile avionics system has similarly been evolving for many years. Integration of GPS into both systems has been underway for over five years, with a follow-on cruise missile system currently in flight test. Rendezvous and docking related studies have been conducted for over five years in support of OMV, CTV, and Advanced Upper Stages, as well as several other internal IR&D efforts. The avionics system and AR&D simulator demonstrated to the SATWG in November 1990 have been upgraded considerably under two IR&D programs in 1991. The Centaur modern avionics system is being flown in block upgrades which started in July of 1990. The Inertial Navigation Unit will fly in November of 1991. The Cruise Missile avionics systems have been fully tested and operationally validated in combat. The integrated AR&D system for space vehicle applications has been under development and testing since 1990. A joint NASA/GD ARD&L system test program is currently being planned to validate several aspects of system performance in three different NASA test facilities in 1992.
NASA Astrophysics Data System (ADS)
Lockyer, Nigel S.
1998-02-01
This paper reports on the CDF-II B physics goals and new detector systems presently being built for Run-II of the Tevatron collider in the year 2000. The B physics goals are focused towards observing and studying CP violation and B_s flavor oscillations. Estimates of expected performance are reported. The new detector systems described are: the 5-layer 3-D silicon vertex detector, the intermediate silicon tracking layers, the central tracking drift chamber, muon system upgrades, and a proposed time-of-flight system.
Hybrid-PIC Computer Simulation of the Plasma and Erosion Processes in Hall Thrusters
NASA Technical Reports Server (NTRS)
Hofer, Richard R.; Katz, Ira; Mikellides, Ioannis G.; Gamero-Castano, Manuel
2010-01-01
HPHall software simulates and tracks the time-dependent evolution of the plasma and erosion processes in the discharge chamber and near-field plume of Hall thrusters. HPHall is an axisymmetric solver that employs a hybrid fluid/particle-in-cell (Hybrid-PIC) numerical approach. HPHall, originally developed by MIT in 1998, was upgraded to HPHall-2 by the Polytechnic University of Madrid in 2006. The Jet Propulsion Laboratory has continued the development of HPHall-2 through upgrades to the physical models employed in the code, and the addition of entirely new ones. Primary among these are the inclusion of a three-region electron mobility model that more accurately depicts the cross-field electron transport, and the development of an erosion sub-model that allows for the tracking of the erosion of the discharge chamber wall. The code is being developed to provide NASA science missions with a predictive tool of Hall thruster performance and lifetime that can be used to validate Hall thrusters for missions.
Highly Survivable Avionics Systems for Long-Term Deep Space Exploration
NASA Technical Reports Server (NTRS)
Alkalai, L.; Chau, S.; Tai, A. T.
2001-01-01
The design of highly survivable avionics systems for long-term (> 10 years) exploration of space is an essential technology for all current and future missions in the Outer Planets roadmap. Long-term exposure to extreme environmental conditions such as high radiation and low temperatures makes survivability in space a major challenge. Moreover, current and future missions are increasingly using commercial technology such as deep sub-micron (0.25 micron) fabrication processes with specialized circuit designs, commercial interfaces, processors, memory, and other commercial off-the-shelf components that were not designed for long-term survivability in space. Therefore, the design of highly reliable and available systems for the exploration of Europa, Pluto and other destinations in deep space requires a comprehensive and fresh approach to this problem. This paper summarizes work in progress in three different areas: a framework for the design of highly reliable and highly available space avionics systems, a distributed reliable computing architecture, and Guarded Software Upgrading (GSU) techniques for software upgrading during long-term missions. Additional information is contained in the original extended abstract.
First Results from a Hardware-in-the-Loop Demonstration of Closed-Loop Autonomous Formation Flying
NASA Technical Reports Server (NTRS)
Gill, E.; Naasz, Bo; Ebinuma, T.
2003-01-01
A closed-loop system for the demonstration of autonomous satellite formation flying technologies using hardware-in-the-loop has been developed. Making use of a GPS signal simulator with a dual radio frequency outlet, the system includes two GPS space receivers as well as a powerful onboard navigation processor dedicated to the GPS-based guidance, navigation, and control of a satellite formation in real time. The closed-loop system allows realistic simulations of autonomous formation flying scenarios, enabling research in the fields of tracking and orbit control strategies for a wide range of applications. The autonomous closed-loop formation acquisition and keeping strategy is based on Lyapunov's direct control method as applied to the standard set of Keplerian elements. This approach not only assures global and asymptotic stability of the control but also maintains valuable physical insight into the applied control vectors. Furthermore, the approach can account for system uncertainties and effectively avoids a computationally expensive solution of the two-point boundary value problem, which renders the concept particularly attractive for implementation in onboard processors. A guidance law has been developed which strictly separates the relative from the absolute motion, thus avoiding the numerical integration of a target trajectory in the onboard processor. Moreover, by using precise kinematic relative GPS solutions, dynamical modeling or filtering is avoided, which provides for an efficient implementation of the process on an onboard processor. A sample formation flying scenario has been created aiming at the autonomous transition of a Low Earth Orbit satellite formation from an initial along-track separation of 800 m to a target distance of 100 m.
Assuming a low-thrust actuator which may be accommodated on a small satellite, a typical control accuracy of less than 5 m has been achieved which proves the applicability of autonomous formation flying techniques to formations of satellites as close as 50 m.
NASA Astrophysics Data System (ADS)
Castro, Andrew; Alice-Usa Collaboration; Alice-Tpc Collaboration
2017-09-01
The Time Projection Chamber (TPC) currently used by ALICE (A Large Ion Collider Experiment at CERN) is a gaseous tracking detector used to study both proton-proton and heavy-ion collisions at the Large Hadron Collider (LHC). In order to accommodate the higher-luminosity collisions planned for LHC Run-3 starting in 2021, the ALICE TPC will undergo a major upgrade during the next LHC shutdown. The TPC is limited to a readout rate of 1000 Hz in minimum bias events due to the intrinsic dead time associated with ion back-flow in the multi-wire proportional chambers (MWPCs) of the TPC. The TPC upgrade will handle the increase in event readout to 50 kHz for heavy-ion minimum bias triggered events expected at the Run-3 luminosity by replacing the MWPCs with a stack of four Gas Electron Multiplier (GEM) foils. The GEM layers will combine different hole pitches to reduce the dead time while maintaining the spatial and energy resolution of the existing TPC. Undertaking the upgrade of the TPC represents a massive endeavor in terms of design, production, construction, quality assurance, and installation; thus the upgrade is coordinated over a number of institutes worldwide. The talk will cover the physics motivation for the upgrade, the ALICE-USA contribution to the construction of Inner Read Out Chambers (IROCs), and QA from the first chambers built in the U.S.
Overview of SCIAMACHY validation: 2002-2004
NASA Astrophysics Data System (ADS)
Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.
2006-01-01
SCIAMACHY, on board Envisat, has been in operation now for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm and at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument providing countries) to face complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements.
Since provisional releases of limited data sets in summer 2002, the operational SCIAMACHY processors established at DLR on behalf of ESA have been upgraded regularly, and some data products - level-1b spectra, level-2 O3, NO2, BrO and clouds data - have improved significantly. Validation results summarised in this paper and also reported in this special issue conclude that for limited periods and geographical domains they can already be used for atmospheric research. Nevertheless, current processor versions still experience known limitations that hamper scientific usability in other periods and domains. Free from the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE/IUP-Bremen, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products - O3, NO2, SO2, H2O total columns; BrO, OClO slant columns; O3, NO2, BrO profiles - already have acceptable, if not excellent, quality. Provisional near-infrared column products - CO, CH4, N2O and CO2 - have already demonstrated their potential for a variety of applications. Cloud and aerosol parameters are retrieved but, with the exception of cloud cover, still suffer from calibration issues. In any case, scientific users are advised to read validation reports carefully before using the data. It is required and anticipated that SCIAMACHY validation will continue throughout the instrument lifetime and beyond, accompanying regular processor upgrades.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
Real-time acquisition and tracking system with multiple Kalman filters
NASA Astrophysics Data System (ADS)
Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.
1994-07-01
The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.
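A single channel of such a parallel tracker can be sketched with a textbook constant-velocity Kalman filter over scalar angle measurements (an illustrative stand-in; the fielded system's state vector and tuning are not described in detail here). Running one such filter per tracked object is what allows the copies to execute independently in parallel.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter: state [angle, angular rate],
    scalar angle measurements zs, returns the filtered angle estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # we measure angle only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],    # process noise
                      [dt**2 / 2, dt]])
    x = np.array([[float(zs[0])], [0.0]])        # initial state
    P = np.eye(2)                                # initial covariance
    est = []
    for z in zs:
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        s = (H @ P @ H.T)[0, 0] + r              # innovation variance
        k = (P @ H.T) / s                        # Kalman gain
        x = x + k * (z - (H @ x)[0, 0])          # measurement update
        P = (np.eye(2) - k @ H) @ P
        est.append(x[0, 0])
    return np.array(est)
```

Because each filter instance only touches its own state, N objects can be tracked by N independent copies of this loop, which is the parallelization strategy the abstract describes.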
Testbeam results of irradiated ams H18 HV-CMOS pixel sensor prototypes
Benoit, M.; Braccini, S.; Casse, G.; ...
2018-02-08
HV-CMOS pixel sensors are a promising option for the tracker upgrade of the ATLAS experiment at the LHC, as well as for other future tracking applications in which large areas are to be instrumented with radiation-tolerant silicon pixel sensors. We present results of testbeam characterisations of the 4th generation of Capacitively Coupled Pixel Detectors (CCPDv4) produced with the ams H18 HV-CMOS process that have been irradiated with different particles (reactor neutrons and 18 MeV protons) to fluences between 1×10^14 and 5×10^15 1-MeV-n_eq. The sensors were glued to ATLAS FE-I4 pixel readout chips and measured at the CERN SPS H8 beamline using the FE-I4 beam telescope. Results for all fluences are very encouraging, with all hit efficiencies being better than 97% for bias voltages of 85 V. The sample irradiated to a fluence of 1×10^15 n_eq, a relevant value for a large volume of the upgraded tracker, exhibited 99.7% average hit efficiency. Furthermore, the results give strong evidence for the radiation tolerance of HV-CMOS sensors and their suitability as sensors for the experimental HL-LHC upgrades and future large-area silicon-based tracking detectors in high-radiation environments.
1986-06-30
Features of computer-aided design systems and statistical quality control procedures that are generic to chip sets and processes are discussed. (Glossary excerpt: PSP, Programmable Signal Processor; SSI, Small Scale Integration; TOW, Tube-Launched, Optically Tracked, Wire-Guided; TTL, Transistor-Transistor Logic.)
Partial Automated Alignment and Integration System
NASA Technical Reports Server (NTRS)
Kelley, Gary Wayne (Inventor)
2014-01-01
The present invention is a Partial Automated Alignment and Integration System (PAAIS) used to automate the alignment and integration of space vehicle components. A PAAIS includes ground support apparatuses, a track assembly with a plurality of energy-emitting components and an energy-receiving component containing a plurality of energy-receiving surfaces. Communication components and processors allow communication and feedback through PAAIS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Michael; Nemati, Bijan; Zhai, Chengxing
We present an approach that significantly increases the sensitivity for finding and tracking small and fast near-Earth asteroids (NEAs). This approach relies on the combined use of a new generation of high-speed cameras, which allow short, high frame-rate exposures of moving objects, effectively 'freezing' their motion, and a computationally enhanced implementation of the 'shift-and-add' data processing technique that helps to improve the signal-to-noise ratio (SNR) for detection of NEAs. The SNR of a single short exposure of a dim NEA is insufficient to detect it in one frame, but by computationally searching for an appropriate velocity vector, shifting successive frames relative to each other and then co-adding the shifted frames in post-processing, we synthetically create a long-exposure image as if the telescope were tracking the object. This approach, which we call 'synthetic tracking,' enhances the familiar shift-and-add technique with the ability to do a wide blind search, detect, and track dim and fast-moving NEAs in near real time. We also discuss how synthetic tracking improves the astrometry of fast-moving NEAs. We apply this technique to observations of two known asteroids conducted on the Palomar 200-inch telescope and demonstrate improved SNR and a 10-fold improvement of astrometric precision over the traditional long-exposure approach. In the past 5 years, about 150 NEAs with absolute magnitudes H = 28 (∼10 m in size) or fainter have been discovered. With an upgraded version of our camera and a field of view of (28 arcmin)^2 on the Palomar 200-inch telescope, synthetic tracking could allow detecting up to 180 such objects per night, including very small NEAs with sizes down to 7 m.
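The core of the shift-and-add idea can be sketched in a few lines (a toy version using integer pixel shifts with wrap-around via np.roll; the actual pipeline searches a much denser velocity grid with sub-pixel shifts):

```python
import numpy as np

def shift_and_add(frames, vx, vy):
    """Co-add frames after shifting each by a trial velocity (px/frame),
    synthesizing a long exposure as if the telescope tracked the object."""
    acc = np.zeros_like(frames[0], dtype=float)
    for t, f in enumerate(frames):
        acc += np.roll(f, (-int(round(vy * t)), -int(round(vx * t))),
                       axis=(0, 1))
    return acc

def best_velocity(frames, candidates):
    """Blind search: pick the trial velocity maximizing the co-added peak."""
    return max(candidates, key=lambda v: shift_and_add(frames, *v).max())
```

With the correct velocity the source stacks coherently (peak grows linearly with the number of frames) while with a wrong velocity it smears out, which is what makes the blind grid search over velocities work.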
A z-Vertex Trigger for Belle II
NASA Astrophysics Data System (ADS)
Skambraks, S.; Abudinén, F.; Chen, Y.; Feindt, M.; Frühwirth, R.; Heck, M.; Kiesling, C.; Knoll, A.; Neuhaus, S.; Paul, S.; Schieck, J.
2015-08-01
The Belle II experiment will go into operation at the upgraded SuperKEKB collider in 2016. SuperKEKB is designed to deliver an instantaneous luminosity L = 8×10^35 cm^-2 s^-1. The experiment will therefore have to cope with a much larger machine background than its predecessor Belle, in particular from events outside of the interaction region. We present the concept of a track trigger, based on a neural network approach, that is able to suppress a large fraction of this background by reconstructing the z (longitudinal) position of the event vertex within the latency of the first level trigger. The trigger uses the hit information from the Central Drift Chamber (CDC) of Belle II within narrow cones in polar and azimuthal angle as well as in transverse momentum (“sectors”), and estimates the z-vertex without explicit track reconstruction. The preprocessing for the track trigger is based on the track information provided by the standard CDC trigger. It takes input from the 2D track finder, adds information from the stereo wires of the CDC, and finds the appropriate sectors in the CDC for each track. Within the sector, the z-vertex is estimated by a specialized neural network, with the drift times from the CDC as input and a continuous output corresponding to the scaled z-vertex. The neural algorithm will be implemented in programmable hardware. To this end a Virtex 7 FPGA board will be used, which provides at present the most promising solution for a fully parallelized implementation of neural networks or alternative multivariate methods. A high-speed interface for external memory will be integrated into the platform, to be able to store the O(10^9) parameters required. The contribution presents the results of our feasibility studies and discusses the details of the envisaged hardware solution.
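The estimator's shape (a vector of drift times in, a scaled z-vertex in [-1, 1] out) can be sketched as a one-hidden-layer network; the weights below are placeholders, whereas the real network is trained on simulated tracks and evaluated in FPGA fabric:

```python
import numpy as np

def z_vertex_mlp(drift_times, W1, b1, W2, b2):
    """One-hidden-layer MLP: CDC drift times from one sector in, a
    z-vertex estimate scaled to the range (-1, 1) out (tanh output unit)."""
    hidden = np.tanh(W1 @ drift_times + b1)      # hidden layer activations
    return float(np.tanh(W2 @ hidden + b2)[0])   # scaled z-vertex estimate
```

Per-sector networks keep the input dimension small and fixed, which is what makes a fully parallel FPGA implementation with precomputed weights feasible.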
Infrared small target tracking based on SOPC
NASA Astrophysics Data System (ADS)
Hu, Taotao; Fan, Xiang; Zhang, Yu-Jin; Cheng, Zheng-dong; Zhu, Bin
2011-01-01
The paper presents a low-cost FPGA-based solution for a real-time infrared small target tracking system. A specialized architecture is presented, based on a soft RISC processor capable of running a kernel-based mean shift tracking algorithm. The mean shift tracking algorithm is realized in a NIOS II soft-core using SOPC (System on a Programmable Chip) technology. Though the mean shift algorithm is widely used for target tracking, the original algorithm cannot be applied directly to infrared small targets, which carry only intensity information; an improved mean shift algorithm is therefore presented in this paper. How the target is described determines whether it can be tracked by the mean shift algorithm. Because color targets are tracked well by mean shift, the target is described by spatial and temporal components that imitate a color image representation, forming a pseudo-color image. Parallel and pipeline techniques are applied to improve processing speed: two RAMs store images alternately in a ping-pong scheme, and a flash memory stores bulk temporary data. The experimental results show that infrared small targets are tracked stably against complicated backgrounds.
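As a rough illustration of the kernel-based tracking idea, the sketch below iterates a window toward the weighted centroid of a per-pixel target-likelihood map. The window size, the flat (unweighted) kernel, and the synthetic blob are assumptions for the demo, not the paper's improved algorithm.

```python
import numpy as np

def mean_shift_track(weight_map, start, win=7, iters=10):
    """Shift a window toward the weighted centroid of `weight_map`.

    weight_map: per-pixel target likelihood (e.g. derived from a
    pseudo-color target model); start: (row, col) initial position.
    """
    r, c = start
    h = win // 2
    for _ in range(iters):
        r0, r1 = max(r - h, 0), min(r + h + 1, weight_map.shape[0])
        c0, c1 = max(c - h, 0), min(c + h + 1, weight_map.shape[1])
        patch = weight_map[r0:r1, c0:c1]
        total = patch.sum()
        if total == 0:
            break
        rows, cols = np.mgrid[r0:r1, c0:c1]
        nr = int(round((rows * patch).sum() / total))
        nc = int(round((cols * patch).sum() / total))
        if (nr, nc) == (r, c):      # converged: centroid equals window center
            break
        r, c = nr, nc
    return r, c

# A bright blob centered at (20, 30) attracts a window started nearby.
img = np.zeros((40, 40))
img[18:23, 28:33] = 1.0
print(mean_shift_track(img, (16, 26)))  # → (20, 30)
```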
Evaluation of the Sentinel-3 Hydrologic Altimetry Processor prototypE (SHAPE) methods.
NASA Astrophysics Data System (ADS)
Benveniste, J.; Garcia-Mondéjar, A.; Bercher, N.; Fabry, P. L.; Roca, M.; Varona, E.; Fernandes, J.; Lazaro, C.; Vieira, T.; David, G.; Restano, M.; Ambrózio, A.
2017-12-01
Inland water scenes are highly variable, both in space and time, which leads to a much broader range of radar signatures than ocean surfaces. This applies to both LRM and "SAR" mode (SARM) altimetry. Nevertheless, the enhanced along-track resolution of SARM altimeters should help improve the accuracy and precision of inland water height measurements from satellite. The SHAPE project (Sentinel-3 Hydrologic Altimetry Processor prototypE), funded by ESA through the Scientific Exploitation of Operational Missions Programme Element (contract number 4000115205/15/I-BG), aims at preparing for the exploitation of Sentinel-3 data over the inland water domain. The SHAPE processor implements all of the steps necessary to derive river and lake water levels and discharge from Delay-Doppler altimetry and to validate them against in situ data. The processor uses FBR CryoSat-2 and L1A Sentinel-3A data as input, along with various ancillary data (proc. param., water masks, L2 corrections, etc.), to produce surface water levels. At a later stage, water level data are assimilated into hydrological models to derive river discharge. This poster presents the improvements obtained with the new methods and algorithms over the regions of interest (the Amazon and Danube rivers and lakes Vanern and Titicaca).
Automatic generation of Web mining environments
NASA Astrophysics Data System (ADS)
Cibelli, Maurizio; Costagliola, Gennaro
1999-02-01
The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
Trends of Diversification and Expansion in Israeli Higher Education.
ERIC Educational Resources Information Center
Guri-Rozenblit, Sarah
1993-01-01
A discussion of recent changes in Israeli higher education looks at the system's structure, emergence of private law schools, the upgrading of some vocationally oriented postsecondary institutions to academic status, academic tracks of study within regional colleges, and possible future developments. Some comparisons are made with trends in other…
Zhang, Banglin; Tallapragada, Vijay; Weng, Fuzhong; Liu, Qingfu; Sippel, Jason A.; Ma, Zaizhong; Bender, Morris A.
2016-01-01
The atmosphere−ocean coupled Hurricane Weather Research and Forecast model (HWRF) developed at the National Centers for Environmental Prediction (NCEP) is used as an example to illustrate the impact of model vertical resolution on track forecasts of tropical cyclones. A number of HWRF forecasting experiments were carried out at different vertical resolutions for Hurricane Joaquin, which occurred from September 27 to October 8, 2015, in the Atlantic Basin. The results show that the track prediction for Hurricane Joaquin is much more accurate with higher vertical resolution. The positive impacts of higher vertical resolution on hurricane track forecasts suggest that National Oceanic and Atmospheric Administration/NCEP should upgrade both HWRF and the Global Forecast System to have more vertical levels. PMID:27698121
NASA Astrophysics Data System (ADS)
Wright, Adam A.; Momin, Orko; Shin, Young Ho; Shakya, Rahul; Nepal, Kumud; Ahlgren, David J.
2010-01-01
This paper presents the application of a distributed systems architecture to an autonomous ground vehicle, Q, that participates in both the autonomous and navigation challenges of the Intelligent Ground Vehicle Competition. In the autonomous challenge the vehicle is required to follow a course, avoiding obstacles and staying within the course boundaries, which are marked by white lines. For the navigation challenge, the vehicle is required to reach a set of target destinations, known as waypoints, with given GPS coordinates, avoiding obstacles that it encounters in the process. Previously the vehicle utilized a single laptop to execute all processing activities, including image processing, sensor interfacing and data processing, path planning and navigation algorithms, and motor control. National Instruments' (NI) LabVIEW served as the programming language for software implementation. As an upgrade to last year's design, an NI compact Reconfigurable Input/Output system (cRIO) was incorporated into the system architecture. The cRIO is NI's solution for rapid prototyping and is equipped with a real-time processor, an FPGA, and modular input/output. Under the current system, the real-time processor handles the path planning and navigation algorithms, while the FPGA gathers and processes sensor data. This setup leaves the laptop free to run the image processing algorithm. Image processing, as previously presented by Nepal et al., is a multi-step line extraction algorithm and constitutes the largest processor load. This distributed approach results in a faster image processing algorithm, which was previously Q's bottleneck. Additionally, the path planning and navigation algorithms are executed more reliably on the real-time processor due to its deterministic nature of operation. The implementation of this architecture required exploration of various inter-system communication techniques.
Data transfer between the laptop and the real time processor using UDP packets was established as the most reliable protocol after testing various options. Improvement can be made to the system by migrating more algorithms to the hardware based FPGA to further speed up the operations of the vehicle.
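A minimal sketch of the kind of UDP exchange described, assuming a simple fixed-format datagram of two 32-bit floats. The loopback address, port selection, and (heading, speed) payload are invented for the demo and do not reflect Q's actual message format.

```python
import socket
import struct

HOST = "127.0.0.1"  # loopback stands in for the laptop <-> cRIO link

def send_command(sock, addr, heading_deg, speed):
    # '!' = network byte order; two 32-bit floats per datagram
    sock.sendto(struct.pack("!ff", heading_deg, speed), addr)

def receive_command(sock):
    data, _ = sock.recvfrom(8)
    return struct.unpack("!ff", data)

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind((HOST, 0))                     # let the OS pick a free port
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(tx, rx.getsockname(), 90.0, 1.5)
heading, speed = receive_command(rx)
print(heading, speed)  # → 90.0 1.5
rx.close(); tx.close()
```

UDP's fixed per-packet framing and lack of connection state make it a common choice for periodic command/telemetry links like this, at the cost of no delivery guarantee.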
Yang, Haw; Welsher, Kevin
2016-11-15
A system and method for non-invasively tracking a particle in a sample is disclosed. The system includes a 2-photon or confocal laser scanning microscope (LSM) and a particle-holding device coupled to a stage with X-Y and Z position control. It also includes a tracking module having a tracking excitation laser and X-Y and Z radiation-gathering components configured to detect deviations of the particle in the X-Y and Z directions, together with a processor coupled to the X-Y and Z radiation-gathering components and configured to generate control signals that drive the stage X-Y and Z position controls to track the movement of the particle. The system may also include a synchronization module configured to generate LSM pixels stamped with stage position and a processing module configured to generate a 3D image showing the 3D trajectory of the particle using the LSM pixels stamped with stage position.
Ice tracking techniques, implementation, performance, and applications
NASA Technical Reports Server (NTRS)
Rothrock, D. A.; Carsey, F. D.; Curlander, J. C.; Holt, B.; Kwok, R.; Weeks, W. F.
1992-01-01
Present techniques of ice tracking make use both of cross-correlation and of edge tracking, the former being more successful in heavy pack ice, the latter being critical for the broken ice of the pack margins. Algorithms must assume some constraints on the spatial variations of displacements to eliminate fliers, but must avoid introducing any errors into the spatial statistics of the measured displacement field. We draw our illustrations from the implementation of an automated tracking system for kinematic analyses of ERS-1 and JERS-1 SAR imagery at the University of Alaska - the Alaska SAR Facility's Geophysical Processor System. Analyses of the ice kinematic data that might have some general interest to analysts of cloud-derived wind fields are the spatial structure of the fields, and the evaluation and variability of average deformation and its invariants: divergence, vorticity and shear. Many problems in sea ice dynamics and mechanics can be addressed with the kinematic data from SAR.
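The cross-correlation step can be sketched as an exhaustive normalized-correlation search of a reference chip over a larger search chip. The chip sizes and synthetic scene below are assumptions; operational trackers add subpixel interpolation and the displacement-consistency constraints mentioned above to reject fliers.

```python
import numpy as np

def displacement_by_correlation(ref, search):
    """Find the integer-pixel offset of `ref` inside the larger `search`
    chip by normalized cross-correlation (a sketch of the principle;
    real ice trackers add subpixel fits and outlier rejection against
    neighbouring displacement vectors)."""
    rh, rw = ref.shape
    sh, sw = search.shape
    best, best_score = (0, 0), -np.inf
    rz = ref - ref.mean()
    for dy in range(sh - rh + 1):
        for dx in range(sw - rw + 1):
            win = search[dy:dy + rh, dx:dx + rw]
            wz = win - win.mean()
            denom = np.sqrt((rz**2).sum() * (wz**2).sum())
            score = (rz * wz).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(1)
scene = rng.uniform(size=(64, 64))
ref = scene[10:26, 12:28]          # 16x16 chip taken at offset (10, 12)
print(displacement_by_correlation(ref, scene))  # → (10, 12)
```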
Adaptive DFT-based Interferometer Fringe Tracking
NASA Technical Reports Server (NTRS)
Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.
2004-01-01
An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
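The core of a sliding-window DFT can be sketched with the standard recursive single-bin update, which avoids recomputing a full transform for every new sample. The bin index, window length, and test tone below are illustrative; the IOTA implementation's computational optimizations and robustness logic are not shown.

```python
import numpy as np

def sliding_dft_bin(x, k, n):
    """Recursive sliding DFT of bin k over windows of length n.

    Returns the complex bin value after each new sample, which is
    useful for tracking the phase of a fringe carrier sample by
    sample instead of recomputing a full FFT per scan."""
    w = np.exp(2j * np.pi * k / n)    # per-sample twiddle factor
    buf = np.zeros(n)                 # circular buffer of the last n samples
    X = 0.0 + 0.0j
    out = []
    for i, s in enumerate(x):
        old = buf[i % n]              # sample leaving the window
        buf[i % n] = s                # sample entering the window
        X = (X - old + s) * w         # slide the window by one sample
        out.append(X)
    return np.array(out)

n, k = 64, 5
t = np.arange(4 * n)
x = np.cos(2 * np.pi * k * t / n)     # unit carrier exactly in bin k
X = sliding_dft_bin(x, k, n)
# After the first full window, |X_k| = n/2 for a unit cosine in bin k
print(round(abs(X[-1]), 1))  # → 32.0
```

The update costs one complex multiply and two adds per sample per tracked bin, which is what makes millisecond-scale tracking loops feasible on a modest processor.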
A Scalability Model for ECS's Data Server
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.; Singhal, Mukesh
1998-01-01
This report presents in four chapters a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes if the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The approaches in the report include a summary of the architecture of ECS's Data server as well as a high level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the data server and the methodology used to solve it.
USDA-ARS?s Scientific Manuscript database
Introduction: Commonly, ground beef processors conduct studies to model contaminant flow through their production systems using surrogate organisms. Typical surrogate organisms may not behave as Escherichia coli O157:H7 during grinding and are not easy to detect at very low levels. Purpose: Develop...
Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi
2012-10-22
This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
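The 3D LUT classification scheme can be sketched as follows: quantize RGB space into bins, precompute class membership for each bin, then classify every pixel with a single table lookup. The bin count and the toy "reddish" rule below are assumptions standing in for the paper's linear color models and fruit histograms.

```python
import numpy as np

def build_lut(model_fn, bins=32):
    """Precompute a bins^3 boolean LUT over quantized RGB space.

    model_fn decides class membership for one RGB triple, evaluated
    at each bin center; here a toy rule stands in for the linear
    color models / fruit histograms used in the paper."""
    q = 256 // bins
    lut = np.zeros((bins, bins, bins), dtype=bool)
    for r in range(bins):
        for g in range(bins):
            for b in range(bins):
                lut[r, g, b] = model_fn(r * q + q // 2,
                                        g * q + q // 2,
                                        b * q + q // 2)
    return lut

def classify(image, lut):
    """Per-pixel classification by one LUT lookup (no per-pixel arithmetic)."""
    q = 256 // lut.shape[0]
    idx = image // q
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

reddish = lambda r, g, b: r > 150 and g < 100 and b < 100  # toy stand-in model
lut = build_lut(reddish)
img = np.array([[[200, 50, 50], [40, 200, 40]]], dtype=np.uint8)
print(classify(img, lut))  # → [[ True False]]
```

Moving all model evaluation into the offline LUT build is what lets a small ARM core classify frames in real time: the per-pixel cost is one shift and one memory read.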
Stereo and IMU-Assisted Visual Odometry for Small Robots
NASA Technical Reports Server (NTRS)
2012-01-01
This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320×240), or 8 fps at VGA (Video Graphics Array, 640×480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
P Kollias; MA Miller; KB Widener
2005-12-30
The United States (U.S.) Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) operates millimeter wavelength cloud radars (MMCRs) in several climatological regimes. The MMCRs are the primary observing tool for quantifying the properties of nearly all radiatively important clouds over the ACRF sites. The first MMCR was installed at the ACRF Southern Great Plains (SGP) site nine years ago, and its original design can be traced to the early 90s. Since then, several MMCRs have been deployed at the ACRF sites, while no significant hardware upgrades have been performed. Recently, a two-stage upgrade (first C-40 Digital Signal Processor [DSP]-based, and later the PC-Integrated Radar AcQuisition System [PIRAQ-III] digital receiver) of the MMCR signal-processing units was completed. Our future MMCR-related goals are: 1) to have a cloud radar system that continues to have high reliability and uptime, and 2) to suggest potential improvements that will address increased sensitivity needs, superior sampling, and low-cost maintenance of the MMCRs. The Traveling Wave Tube (TWT) technology, the frequency (35 GHz), the radio frequency (RF) layout, the antenna, the calibration and radar control procedures, and the environmental enclosure of the MMCR remain assets for our ability to detect the profile of hydrometeors at all heights in the troposphere at the ACRF sites.
NASA Technical Reports Server (NTRS)
Edwards, C. D.
1990-01-01
Connected-element interferometry (CEI) has the potential to provide high-accuracy angular spacecraft tracking on short baselines by making use of the very precise phase delay observable. Within the Goldstone Deep Space Communications Complex (DSCC), one of three tracking complexes in the NASA Deep Space Network, baselines of up to 21 km in length are available. Analysis of data from a series of short-baseline phase-delay interferometry experiments is presented to demonstrate the potential tracking accuracy on these baselines. Repeated differential observations of pairs of angularly close extragalactic radio sources were made to simulate differential spacecraft-quasar measurements. Fiber-optic data links and a correlation processor are currently being developed and installed at Goldstone for a demonstration of real-time CEI in 1990.
A Scalable Distributed Approach to Mobile Robot Vision
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.
1997-01-01
This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).
UKIRT fast guide system improvements
NASA Astrophysics Data System (ADS)
Balius, Al; Rees, Nicholas P.
1997-09-01
The United Kingdom Infra-Red Telescope (UKIRT) has recently undergone the first major upgrade program since its construction. One part of the upgrade program was an adaptive tip-tilt secondary mirror operated in a closed loop with a CCD system, collectively called the fast guide system. The installation of the new secondary and associated systems was carried out in the first half of 1996. Initial testing of the fast guide system has shown great improvement in guide accuracy. The initial installation included a fixed integration time CCD. In the first part of 1997 an integration time controller based on computed guide star luminosity was implemented in the fast guide system. Also, a Kalman-type estimator was installed in the image tracking loop, based on a dynamic model and knowledge of the statistical properties of the guide star position error measurement as a function of computed guide star magnitude and CCD integration time. The new configuration was tested in terms of improved guide performance and graceful degradation when tracking faint guide stars. This paper describes the modified fast guide system configuration and reports the results of performance tests.
Kalman filter tracking on parallel architectures
NASA Astrophysics Data System (ADS)
Cerati, G.; Elmer, P.; Krutelyov, S.; Lantz, S.; Lefebvre, M.; McDermott, K.; Riley, D.; Tadel, M.; Wittich, P.; Wurthwein, F.; Yagil, A.
2017-10-01
We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way that is conducive to both multithreading and vectorization. We demonstrate very good performance on Intel Xeon and Xeon Phi architectures, as well as promising first results on Nvidia GPUs.
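The data reorganization the authors describe can be illustrated with a batched Kalman measurement update in NumPy, where the linear algebra is applied across all track candidates at once in a structure-of-arrays layout so it can be vectorized. The 2-state tracks and position-only measurement below are a toy setup, not the CMS tracking model.

```python
import numpy as np

def batched_kalman_update(x, P, z, H, R):
    """One Kalman measurement update applied to N tracks at once.

    Structure-of-arrays layout: x is (N, d) states, P is (N, d, d)
    covariances, z is (N, m) measurements. Batching the linear algebra
    over tracks mirrors the recasting needed for SIMD execution.
    Minimal sketch: no gating, smoothing, or material effects.
    """
    y = z - x @ H.T                         # innovations, (N, m)
    S = H @ P @ H.T + R                     # innovation covariances, (N, m, m)
    K = P @ H.T @ np.linalg.inv(S)          # gains, (N, d, m)
    x_new = x + (K @ y[..., None])[..., 0]  # updated states
    P_new = P - K @ H @ P                   # updated covariances
    return x_new, P_new

# Toy: 1000 two-state tracks (position, slope), position-only measurement.
N, d, m = 1000, 2, 1
H = np.array([[1.0, 0.0]])
R = np.array([[0.04]])
x = np.zeros((N, d))
P = np.tile(np.eye(d), (N, 1, 1))
z = np.ones((N, m))                         # every track observes position 1.0
x, P = batched_kalman_update(x, P, z, H, R)
print(round(x[0, 0], 3))  # → 0.962
```

With identical prior and measurement for each track, the update reduces to the scalar gain 1/(1 + R) = 1/1.04 applied uniformly; the point of the batch layout is that the same arithmetic vectorizes when the tracks differ.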
Techniques in processing multi-frequency multi-polarization spaceborne SAR data
NASA Technical Reports Server (NTRS)
Curlander, John C.; Chang, C. Y.
1991-01-01
This paper presents the algorithm design of the SIR-C ground data processor, with emphasis on the unique elements involved in the production of registered multifrequency polarimetric data products. A quick-look processing algorithm used for generation of low-resolution browse image products and estimation of echo signal parameters is also presented. Specifically, the discussion covers: (1) azimuth reference function generation to produce registered polarimetric imagery; (2) geometric rectification to accommodate cross-track and along-track Doppler drifts; (3) multilook filtering designed to generate output imagery with a uniform resolution; and (4) efficient coding to compress the polarimetric image data for distribution.
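Multilook filtering can be sketched as block-averaging of single-look intensity, trading resolution for reduced speckle variance. The look counts and exponential speckle model below are illustrative; the SIR-C design additionally sizes the looks to equalize output resolution, which is not modeled here.

```python
import numpy as np

def multilook(intensity, looks_az=4, looks_rg=1):
    """Average non-overlapping blocks of a single-look intensity image.

    Reduces speckle variance by roughly 1/(looks_az * looks_rg) at the
    cost of resolution; a sketch of the principle, not the SIR-C filter.
    """
    h, w = intensity.shape
    h -= h % looks_az                       # trim to a whole number of blocks
    w -= w % looks_rg
    blocks = intensity[:h, :w].reshape(
        h // looks_az, looks_az, w // looks_rg, looks_rg)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(2)
single_look = rng.exponential(scale=1.0, size=(512, 512))  # speckle-like intensity
four_look = multilook(single_look, looks_az=4, looks_rg=1)
# Variance drops by roughly the number of looks
print(single_look.var() / four_look.var())  # ≈ 4
```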
NASA Technical Reports Server (NTRS)
Velden, Christopher
1995-01-01
The research objectives in this proposal were part of a continuing program at UW-CIMSS to develop and refine an automated geostationary satellite winds processing system that can be utilized in both research and operational environments. The majority of the originally proposed tasks were successfully accomplished, and in some cases the progress exceeded the original goals. Much of the research and development supported by this grant resulted in upgrades and modifications to the existing automated satellite winds tracking algorithm. These modifications were put to the test through case study demonstrations and numerical model impact studies. After being successfully demonstrated, the modifications and upgrades were implemented into the NESDIS algorithms in Washington DC, and have become part of the operational support. A major focus of the research supported under this grant was the continued development of water-vapor-tracked winds from geostationary observations. The fully automated UW-CIMSS tracking algorithm has been tuned to provide complete upper-tropospheric coverage from this data source, with data set quality close to that of operational cloud motion winds. Multispectral water vapor observations were collected and processed from several different geostationary satellites. The tracking and quality control algorithms were tuned and refined based on ground-truth comparisons and case studies involving impact on numerical model analyses and forecasts. The results have shown that the water vapor motion winds are of good quality, complement the cloud motion wind data, and can have a positive impact in NWP on many meteorological scales.
ILRS: Current Status and Future Challenges
NASA Astrophysics Data System (ADS)
Pearlman, M. R.; Bianco, G.; Merkowitz, S.; Noll, C. E.; Pavlis, E. C.; Shargorodsky, V.; Zhongping, Z.
2016-12-01
The International Laser Ranging Service (ILRS) is expanding its ground tracking capability with new stations and upgrades to current stations. Our Russian colleagues have installed new stations in Brasilia and South Africa, and have several other sites in process or in planning. The NASA Space Geodesy Program is preparing equipment for U.S. sites (McDonald and Haleakala) and with the Norwegian National Mapping Agency in Ny Ålesund; further deployments are planned. Upgrades continue at sites in China, and new sites are underway or planned in Europe and India. Stations are moving to higher repetition rates and more efficient detection to enhance satellite interleaving capability; some stations have already implemented automated processes that could lead to around-the-clock operation to increase temporal coverage and to make more efficient use of personnel. The ILRS roster of supported satellites continues to grow with the addition of the LARES satellite to augment tracking for the improvement of the ITRF. New GNSS constellations and geosynchronous satellites now bring the total roster to over 80 satellites - so much so, that new tracking strategies and time and location multiplexing are under consideration. There continues to be strong interest in Lunar Ranging. New applications of one-way and two-way laser ranging include ps-accurate time transfer, laser transponders for interplanetary ranging, and tracking of space debris. New laser ranging data products are being developed, including satellite orbit products, satellite orientation, gravity field products, and products to characterize the quality of data and station performance. This talk will give a brief summary of recent progress, current challenges and a view of the path ahead.
Parallel design patterns for a low-power, software-defined compressed video encoder
NASA Astrophysics Data System (ADS)
Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar
2011-06-01
Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allow the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
DARPA Orbital Express program: effecting a revolution in space-based systems
NASA Astrophysics Data System (ADS)
Whelan, David A.; Adler, E. A.; Wilson, Samuel B., III; Roesler, Gordon M., Jr.
2000-11-01
A primary goal of the Defense Advanced Research Projects Agency is to develop innovative, high-risk technologies with the potential of a revolutionary impact on missions of the Department of Defense. DARPA is developing a space experiment to prove the feasibility of autonomous on- orbit servicing of spacecraft. The Orbital Express program will demonstrate autonomous on-orbit refueling, as well as autonomous delivery of a small payload representing an avionics upgrade package. The maneuverability provided to spacecraft from a ready refueling infrastructure will enable radical new capabilities for the military, civil and commercial spacecraft. Module replacement has the potential to extend bus lifetimes, and to upgrade the performance of key subsystems (e.g. processors) at the pace of technology development. The Orbital Express technology development effort will include the necessary autonomy for a viable servicing infrastructure; a universal interface for docking, refueling and module transfers; and a spacecraft bus design compatible with this servicing concept. The servicer spacecraft of the future may be able to act as a host platform for microsatellites, extending their capabilities while reducing risk. An infrastructure based on Orbital Express also benefits from, and stimulates the development of, lower-cost launch strategies.
NASA Astrophysics Data System (ADS)
Meng, X. T.; Levin, D. S.; Chapman, J. W.; Zhou, B.
2016-09-01
The ATLAS Muon Spectrometer endcap thin-Resistive Plate Chamber trigger project complements the New Small Wheel endcap Phase-1 upgrade for higher luminosity LHC operation. These new trigger chambers, located in a high rate region of ATLAS, will improve overall trigger acceptance and reduce the fake muon trigger incidence. These chambers must generate a low level muon trigger to be delivered to a remote high level processor within a stringent latency requirement of 43 bunch crossings (1075 ns). To help meet this requirement the High Performance Time to Digital Converter (HPTDC), a multi-channel ASIC designed by the CERN Microelectronics group, has been proposed for the digitization of the fast front end detector signals. This paper investigates the HPTDC performance in the context of the overall muon trigger latency, employing detailed behavioral Verilog simulations in which the latency in triggerless mode is measured for a range of configurations and under realistic hit rate conditions. The simulation results show that various HPTDC operational configurations, including leading edge and pair measurement modes, can provide high efficiency (>98%) to capture and digitize hits within a time interval satisfying the Phase-1 latency tolerance.
Status and Plan for The Upgrade of The CMS Pixel Detector
NASA Astrophysics Data System (ADS)
Lu, Rong-Shyang; CMS Collaboration
2016-04-01
The silicon pixel detector is the innermost component of the CMS tracking system and plays a crucial role in the all-silicon CMS tracker. While the current pixel tracker is designed for and performing well at an instantaneous luminosity of up to 1 × 10^34 cm^-2 s^-1, it can no longer be operated efficiently at significantly higher values. Based on the strong performance of the LHC accelerator, it is anticipated that peak luminosities of two times the design luminosity are likely to be reached before 2018 and perhaps significantly exceeded in the running period until 2022, referred to as LHC Run 3. Therefore, an upgraded pixel detector, referred to as the phase 1 upgrade, is planned for the year-end technical stop in 2016. With a new pixel readout chip (ROC), an additional fourth layer, two additional endcap disks, and a significantly reduced material budget the upgraded pixel detector will be able to sustain the efficiency of the pixel tracker at the increased requirements imposed by high luminosities and pile-up. The main new features of the upgraded pixel detector will be an ultra-light mechanical design, a digital readout chip with higher rate capability and a new cooling system. These and other design improvements, along with results of Monte Carlo simulation studies for the expected performance of the new pixel detector, will be discussed and compared to those of the current CMS detector.
NASA Astrophysics Data System (ADS)
Hennessy, Karol; LHCb VELO Upgrade Collaboration
2017-02-01
The upgrade of the LHCb experiment, scheduled for LHC Run 3 starting in 2021, will transform the experiment into a trigger-less system reading out the full detector at a 40 MHz event rate. All data reduction algorithms will be executed in a high-level software farm, enabling the detector to run at luminosities of 2×10^33 cm^-2 s^-1. The Vertex Locator (VELO) is the silicon vertex detector surrounding the interaction region. The current detector will be replaced with a hybrid pixel system equipped with electronics capable of reading out at 40 MHz. The upgraded VELO will provide fast pattern recognition and track reconstruction to the software trigger. The silicon pixel sensors have 55×55 μm^2 pitch and are read out by the VeloPix ASIC, from the Timepix/Medipix family. The hottest region will have pixel hit rates of 900 Mhits/s, yielding a total data rate of more than 3 Tbit/s for the upgraded VELO. The detector modules are located in a separate vacuum, separated from the beam vacuum by a thin custom-made foil. The foil will be manufactured through milling and possibly thinned further by chemical etching. The material budget will be minimised by the use of evaporative CO2 coolant circulating in microchannels within 400 μm thick silicon substrates. The current status of the VELO upgrade is described and the latest results from operation of irradiated sensor assemblies are presented.
Advanced Physiological Estimation of Cognitive Status. Part 2
2011-05-24
Neurofeedback Algorithms and Gaze Controller EEG Sensor System. g.USBamp: internal 24-bit ADC and digital signal processor; 16 channels (expandable...). Subject terms: EEG, eye-tracking, mental state estimation, machine learning. Leonard J. Trejo, Pacific Development and Technology LLC, 999 Commercial St., Palo... (fatigue, overload). Technology Transfer Opportunity: technology from PDT, methods to acquire various physiological signals (EEG, EOG, EMG, ECG, etc.)
Analysis and simulation tools for solar array power systems
NASA Astrophysics Data System (ADS)
Pongratananukul, Nattorn
This dissertation presents simulation tools developed specifically for the design of solar array power systems. Contributions are made in several aspects of the system design phases, including solar source modeling, system simulation, and controller verification. A tool to automate the study of solar array configurations using general-purpose circuit simulators has been developed based on the modeling of individual solar cells. The hierarchical structure of the solar cell model, including semiconductor properties, allows simulation of electrical properties as well as evaluation of the impact of environmental conditions. A second tool provides a co-simulation platform to verify the performance of an actual digital controller implemented in programmable hardware such as a DSP processor, while the entire solar array, including the DC-DC power converter, is modeled in software algorithms running on a computer. This "virtual plant" allows code for the digital controller to be developed and debugged, and the control algorithm to be improved. One important task in solar arrays is to track the maximum power point on the array in order to maximize the power that can be delivered. Digital controllers implemented with programmable processors are particularly attractive for this task because sophisticated tracking algorithms can be implemented and revised when needed to optimize their performance. The proposed co-simulation tools are thus very valuable in developing and optimizing the control algorithm before the system is built. Examples that demonstrate the effectiveness of the proposed methodologies are presented. The proposed simulation tools are also valuable in the design of multi-channel arrays. In the specific system that we have designed and tested, the control algorithm is implemented on a single digital signal processor. In each of the channels the maximum power point is tracked individually.
In the prototype we built, off-the-shelf commercial DC-DC converters were used. Finally, the overall performance of the entire system was evaluated using solar array simulators capable of reproducing various I-V characteristics, as well as an electronic load. Experimental results are presented.
SDAV Viz July Progress Update: LANL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer
2012-07-30
SDAV Viz July Progress Update: (1) VPIC (Vector Particle in Cell) Kinetic Plasma Simulation Code - (a) Implemented first version of an in-situ adapter based on the ParaView CoProcessing Library, (b) Three pipelines: vtkDataSetMapper, vtkContourFilter, vtkPistonContour, (c) Next, resolve issue at boundaries of processor domains; add more advanced viz/analysis pipelines; (2) Halo finding/merger trees - (a) Summer student Wathsala W. from the University of Utah is working on a data-parallel halo finder algorithm using PISTON, (b) Timo Bremer (LLNL), Valerio Pascucci (Utah), George Zagaris (Kitware), and LANL people are interested in using merger trees for tracking the evolution of halos in cosmo simulations; discussed possible overlap with work by Salman Habib and Katrin Heitmann (Argonne) during their visit to LANL 7/11; (3) PISTON integration in ParaView - Now available from the ParaView github.
Prototype Test Results for the Single Photon Detection SLR2000 Satellite Laser Ranging System
NASA Technical Reports Server (NTRS)
Zagwodzki, Thomas W.; McGarry, Jan F.; Degnan, John J.; Cheek, Jack W.; Dunn, Peter J.; Patterson, Don; Donovan, Howard
2004-01-01
NASA's aging Satellite Laser Ranging (SLR) network is scheduled to be replaced over the next few years with a fully automated single photon detection system. A prototype of this new system, called SLR2000, is currently undergoing field trials at the Goddard Space Flight Center in Greenbelt, Maryland to evaluate photon counting techniques and determine system hardware, software, and control algorithm performance levels and limitations. Newly developed diode pumped microchip lasers and quadrant microchannel plate-based photomultiplier tubes have enabled the development of this high repetition rate single photon detection SLR system. The SLR2000 receiver threshold is set at the single photoelectron (pe) level but tracks satellites with an average signal level typically much less than 1 pe. The 2 kHz laser fire rate aids in satellite acquisition and tracking and will enable closed loop tracking by accumulating single photon count statistics in a quadrant detector and using this information to correct for pointing errors. Laser transmitter beamwidths of 10 arcseconds (FWHM) or less are currently being used to maintain an adequate signal level for tracking while the receiver field of view (FOV) has been opened to 40 arcseconds to accommodate point ahead/look behind angular offsets. In the near future, the laser transmitter point ahead will be controlled by a pair of Risley prisms. This will allow the telescope to point behind and enable closure of the receiver FOV to roughly match the transmitter beam divergence. Bandpass filters (BPF) are removed for night tracking operations while 0.2 nm or 1 nm filters are used during daylight operation. Both day and night laser tracking of Low Earth Orbit (LEO) satellites has been achieved with a laser transmitter energy of only 65 microjoules per pulse. Satellite tracking is presently limited to LEO satellites until the brassboard laser transmitter can be upgraded or replaced. 
Simultaneous tracks have also been observed with NASA's SLR standard, MOBLAS 7, for the purposes of data comparison and identification of biases. Work continues to optimize the receive optics; upgrade or replace the laser transmitter; calibrate the quadrant detector, the point ahead Risley prisms, and event timer verniers; and test normal point generation with SLR2000 data. This paper will report on the satellite tracking results to date, issues yet to be resolved, and future plans for the SLR2000 system.
Recent Upgrades at the Fermilab Test Beam Facility
NASA Astrophysics Data System (ADS)
Rominsky, Mandy
2016-03-01
The Fermilab Test Beam Facility is a world class facility for testing and characterizing particle detectors. The facility has been in operation since 2005 and has undergone significant upgrades in the last two years. A second beam line with cryogenic support has been added and the facility has adopted the MIDAS data acquisition system. The facility also recently added a cosmic telescope test stand and improved tracking capabilities. With two operational beam lines, the facility can deliver a variety of particle types and momenta ranging from 120 GeV protons in the primary beam line down to 200 MeV particles in the tertiary beam line. In addition, recent work has focused on analyzing the beam structure to provide users with information on the data they are collecting. With these improvements, the Fermilab Test Beam facility is capable of supporting High Energy physics applications as well as industry users. The upgrades will be discussed along with plans for future improvements.
NASA Astrophysics Data System (ADS)
Schambach, J.; Rossewij, M. J.; Sielewicz, K. M.; Aglieri Rinella, G.; Bonora, M.; Ferencei, J.; Giubilato, P.; Vanat, T.
2016-12-01
The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon pixel based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. This contribution describes laboratory and radiation testing results with this prototype board set.
Ambiguous data association and entangled attribute estimation
NASA Astrophysics Data System (ADS)
Trawick, David J.; Du Toit, Philip C.; Paffenroth, Randy C.; Norgard, Gregory J.
2012-05-01
This paper presents an approach to attribute estimation incorporating data association ambiguity. In modern tracking systems, time pressures often leave all but the most likely data association alternatives unexplored, possibly producing track inaccuracies. Numerica's Bayesian Network Tracking Database, a key part of its Tracker Adjunct Processor, captures and manages the data association ambiguity for further analysis and possible ambiguity reduction/resolution using subsequent data. Attributes are non-kinematic discrete sample space sensor data. They may be as distinctive as aircraft ID, or as broad as friend or foe. Attribute data may provide improvements to data association by a process known as Attribute Aided Tracking (AAT). Indeed, certain uniquely identifying attributes (e.g. aircraft ID), when continually reported, can be used to define data association (tracks are the collections of observations with the same ID). However, attribute data arriving infrequently, combined with erroneous choices from ambiguous data associations, can produce incorrect attribute and kinematic state estimation. Ambiguous data associations define the tracks that are entangled with each other. Attribute data observed on an entangled track then modify the attribute estimates on all tracks entangled with it. For example, if a red track and a blue track pass through a region of data association ambiguity, these tracks become entangled. Later red observations on one entangled track make the other track more blue, and reduce the data association ambiguity. Methods for this analysis have been derived and implemented for efficient forward filtering and forensic analysis.
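The red/blue example above can be sketched with a probabilistic-data-association-style attribute update, in which the Bayes correction from a report is weighted by the probability that the report actually belongs to the track. This is a heavily simplified stand-in for the paper's Bayesian Network Tracking Database, and all numbers below are made up for illustration.

```python
# Simplified "entangled attribute" update. The sensor confusion probabilities
# and association weights are hypothetical, not values from the paper.

def bayes(prior_red, p_red_given_red=0.9, p_red_given_blue=0.1):
    """Posterior P(red) after a 'red' report known to come from this track."""
    num = p_red_given_red * prior_red
    return num / (num + p_red_given_blue * (1.0 - prior_red))

def entangled_update(prior_red, assoc_prob):
    """Blend the Bayes update by the probability the report belongs here."""
    return assoc_prob * bayes(prior_red) + (1.0 - assoc_prob) * prior_red

# Two tracks that passed through a region of association ambiguity:
# a 'red' report is 70% likely from track A, 30% likely from track B.
p_a = entangled_update(0.5, 0.7)   # track A becomes more red
p_b = entangled_update(0.5, 0.3)   # track B shifts less, staying more blue
print(round(p_a, 2), round(p_b, 2))  # 0.78 0.62
```

Both entangled tracks move toward red, in proportion to their association weight, which mirrors the behavior described in the abstract.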
Bringing Algorithms to Life: Cooperative Computing Activities Using Students as Processors.
ERIC Educational Resources Information Center
Bachelis, Gregory F.; And Others
1994-01-01
Presents cooperative computing activities in which each student plays the role of a switch or processor and acts out algorithms. Includes binary counting, finding the smallest card in a deck, sorting by selection and merging, adding and multiplying large numbers, and sieving for primes. (16 references) (Author/MKR)
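One of the activities listed above, sieving for primes, translates directly into code: in the classroom version each student holds a number and crosses themselves out when a "processor" student announces a multiple. A minimal sketch of the same algorithm:

```python
# Sieve of Eratosthenes, mirroring the classroom activity: each pass, the
# smallest surviving number crosses out its multiples.

def sieve(n):
    """Return all primes <= n."""
    is_candidate = [True] * (n + 1)
    is_candidate[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_candidate[p]:
            for multiple in range(p * p, n + 1, p):
                is_candidate[multiple] = False  # "cross out" this student
    return [i for i, alive in enumerate(is_candidate) if alive]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```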
A computerized aircraft battery servicing facility
NASA Technical Reports Server (NTRS)
Glover, Richard D.
1992-01-01
The latest upgrade to the Aerospace Energy Systems Laboratory (AESL) is described. The AESL is a distributed digital system consisting of a central system and battery servicing stations connected by a high-speed serial data bus. The entire system is located in two adjoining rooms; the bus length is approximately 100 ft. Each battery station contains a digital processor, data acquisition, floppy diskette data storage, and operator interfaces. The operator initiates a servicing task and thereafter the battery station monitors the progress of the task and terminates it at the appropriate time. The central system provides data archives, manages the data bus, and provides a timeshare interface for multiple users. The system also hosts software production tools for the battery stations and the central system.
Estimating the recreational-use value for hiking in Bellenden Ker National Park, Australia.
Nillesen, Eleonora; Wesseler, Justus; Cook, Averil
2005-08-01
The recreational-use value of hiking in the Bellenden Ker National Park, Australia has been estimated using a zonal travel cost model. Multiple-destination visitors have been accounted for by converting visitors' own ordinal ranking of the various sites visited into numerical weights, using an expected-value approach. The value of hiking and camping in this national park was found to be AUS $250,825 per year, or AUS $144.45 per visitor per year, which is similar to findings from other studies valuing recreational benefits. The management of the park can use these estimates when considering the introduction of a system of user-pays fees. In addition, they might be important when decisions need to be made about the allocation of resources for maintenance or upgrade of tracks and facilities.
Beam Tests of Diamond-Like Carbon Coating for Mitigation of Electron Cloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Jeffrey; Backfish, Michael; Kato, Shigeki
Electron cloud beam instabilities are an important consideration in virtually all high-energy particle accelerators and could pose a formidable challenge to forthcoming high-intensity accelerator upgrades. Our results evaluate the efficacy of a diamond-like carbon (DLC) coating for the mitigation of electron cloud in the Fermilab Main Injector. The interior surface of the beampipe conditions in response to electron bombardment from the electron cloud, and we track the change in electron cloud flux over time in the DLC-coated beampipe and the uncoated stainless steel beampipe. The electron flux is measured by retarding field analyzers placed in a field-free region of the Main Injector. We find the DLC coating reduces the electron cloud signal to roughly 2% of that measured in the uncoated stainless steel beampipe.
L1 track triggers for ATLAS in the HL-LHC
Lipeles, E.
2012-01-01
The HL-LHC, the planned high luminosity upgrade for the LHC, will increase the collision rate in the ATLAS detector approximately a factor of 5 beyond the luminosity for which the detectors were designed, while also increasing the number of pile-up collisions in each event by a similar factor. This means that the level-1 trigger must achieve a higher rejection factor in a more difficult environment. This presentation discusses the challenges that arise in this environment and strategies being considered by ATLAS to include information from the tracking systems in the level-1 decision. The main challenges involve reducing the data volume exported from the tracking system, for which two options are under consideration: a region-of-interest based system and an intelligent sensor method which filters on hits likely to come from higher transverse momentum tracks.
MW 08-multi-beam air and surface surveillance radar
NASA Astrophysics Data System (ADS)
1989-09-01
Signaal of the Netherlands has developed and is marketing the MW 08, a 3-D radar to be used for short to medium range surveillance, target acquisition, and tracking. The MW 08 is a fully automated detection and tracking radar, designed to counter threats from aircraft and low-flying antiship missiles; it can also deal with the high-level missile threat. The MW 08 operates in the 5 cm band using one antenna for both transmitting and receiving. The antenna is an array consisting of 8 stripline antennas. The received radar energy is processed by 8 receiver channels. These channels come together in the beam-forming network, in which 8 virtual beams are formed. From this beam pattern, 6 beams are used for the elevation coverage of 0-70 degrees. The MW 08's output signals from the beam former are further handled by FFT and plot processors for target speed information, clutter rejection, and jamming suppression. A general purpose computer handles target track initiation and tracking. Tracking data are transferred to the command and control systems with 3-D target information for the fastest possible lock-on.
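The beam-forming step described above, combining the 8 receiver channels with phase weights to form simultaneous beams, can be sketched with explicit steering vectors. This is a generic DFT-style beamformer, not the MW 08's actual network; the element spacing, wavelength ratio, and beam angles are illustrative assumptions.

```python
# Generic narrowband beamformer sketch for an 8-channel linear array.
# Half-wavelength spacing and the angle grid are assumptions for illustration.
import numpy as np

N = 8                  # receiver channels / array elements
d_over_lambda = 0.5    # element spacing in wavelengths (assumed)

def steering_vector(theta_deg):
    """Phase progression across the array for a plane wave from theta."""
    k = 2 * np.pi * d_over_lambda * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(N))

def form_beams(channel_samples, beam_angles_deg):
    """One output sample per beam: conjugate-weighted sum of the channels."""
    return [np.vdot(steering_vector(a), channel_samples) / N
            for a in beam_angles_deg]

# A wave arriving from 20 degrees gives the largest response in the
# beam steered to 20 degrees (index 2 of the angle list).
signal = steering_vector(20.0)
beams = form_beams(signal, [0, 10, 20, 30, 40, 50])
print(max(range(6), key=lambda i: abs(beams[i])))  # 2
```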
RASSP signal processing architectures
NASA Astrophysics Data System (ADS)
Shirley, Fred; Bassett, Bob; Letellier, J. P.
1995-06-01
The rapid prototyping of application specific signal processors (RASSP) program is an ARPA/tri-service effort to dramatically improve the process by which complex digital systems, particularly embedded signal processors, are specified, designed, documented, manufactured, and supported. The domain of embedded signal processing was chosen because it is important to a variety of military and commercial applications as well as for the challenge it presents in terms of complexity and performance demands. The principal effort is being performed by two major contractors, Lockheed Sanders (Nashua, NH) and Martin Marietta (Camden, NJ). For both, improvements in methodology are to be exercised and refined through the performance of individual 'Demonstration' efforts. The Lockheed Sanders' Demonstration effort is to develop an infrared search and track (IRST) processor. In addition, both contractors' results are being measured by a series of externally administered (by Lincoln Labs) six-month Benchmark programs that measure process improvement as a function of time. The first two Benchmark programs are designing and implementing a synthetic aperture radar (SAR) processor. Our demonstration team is using commercially available VME modules from Mercury Computer to assemble a multiprocessor system scalable from one to hundreds of Intel i860 microprocessors. Custom modules for the sensor interface and display driver are also being developed. This system implements either proprietary or Navy-owned algorithms to perform the compute-intensive IRST function in real time in an avionics environment. Our Benchmark team is designing custom modules using commercially available processor chip sets, communication submodules, and reconfigurable logic devices. One of the modules contains multiple vector processors optimized for fast Fourier transform processing.
Another module is a fiberoptic interface that accepts high-rate input data from the sensors and provides video-rate output data to a display. This paper discusses the impact of simulation on choosing signal processing algorithms and architectures, drawing from the experiences of the Demonstration and Benchmark inter-company teams at Lockheed Sanders, Motorola, Hughes, and ISX.
Symplectic multi-particle tracking on GPUs
NASA Astrophysics Data System (ADS)
Liu, Zhicong; Qiang, Ji
2018-05-01
A symplectic multi-particle tracking model is implemented on the Graphic Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model can preserve phase space structure and reduce non-physical effects in long term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation helps save more than a factor of two total computing time in comparison to the CPU implementation.
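The phase-space-preserving property mentioned above is what distinguishes symplectic integrators from generic ones: the energy error stays bounded over arbitrarily long tracking runs instead of drifting. A minimal single-particle sketch, using a kick-drift-kick leapfrog step in a linear focusing channel (a stand-in for the paper's multi-particle GPU model; units and parameters are arbitrary):

```python
# Symplectic kick-drift-kick (leapfrog) tracking of one particle in a
# linear focusing channel: dx/dt = p, dp/dt = -k x. Parameters are arbitrary.

def leapfrog_track(x, p, k=1.0, dt=0.1, steps=100_000):
    """Advance (x, p) through 'steps' symplectic integration steps."""
    for _ in range(steps):
        p -= 0.5 * dt * k * x   # half kick
        x += dt * p             # drift
        p -= 0.5 * dt * k * x   # half kick
    return x, p

x, p = leapfrog_track(1.0, 0.0)
energy = 0.5 * p * p + 0.5 * x * x   # exact value is 0.5 for these initials
print(abs(energy - 0.5) < 1e-2)       # True: error stays bounded long-term
```

A non-symplectic method (e.g. explicit Euler) would show the energy growing steadily over this many steps, which is why symplecticity matters for long-term beam property evaluation.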
A game theory approach to target tracking in sensor networks.
Gu, Dongbing
2011-02-01
In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability but with limited range and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape from the detection by maximizing the estimation error. This adversary behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game in this paper and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented via modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the target tracking algorithm proposed in this paper provides a satisfactory result.
Extraterrestrial surface propulsion systems
NASA Technical Reports Server (NTRS)
Ash, Robert L.; Blackstock, Dexter L.; Barnhouse, K.; Charalambous, Z.; Coats, J.; Danagan, J.; Davis, T.; Dickens, J.; Harris, P.; Horner, G.
1992-01-01
Lunar traction systems, Mars oxygen production, and Mars methane engine operation were the three topics studied during 1992. An elastic loop track system for lunar construction operations was redesigned and is being tested. A great deal of work on simulating the lunar environment to facilitate traction testing has been reported. Operation of an oxygen processor under vacuum conditions has been the focus of another design team. They have redesigned the processor facility. This included improved seals and heat shields. Assuming methane and oxygen can be produced from surface resources on Mars, a third design team has addressed the problem of using Mars atmospheric carbon dioxide to control combustion temperatures in an internal combustion engine. That team has identified appropriate tests and instrumentation. They have reported on the test rig that they designed and the computer-based system for acquiring data.
Schedulers with load-store queue awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.
2017-02-07
In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
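The idea in the claim, a compile-time scheduler that models the time-varying LSQ occupancy and delays memory instructions when the modeled queue is full, can be sketched as a toy cycle-by-cycle scheduler. The 4-entry queue, the 8-cycle occupancy, and the instruction encoding are all illustrative assumptions, not details from the patent.

```python
# Toy LSQ-aware scheduler. Capacity and latency values are hypothetical.
LSQ_CAPACITY = 4
MEM_LATENCY = 8   # cycles a memory op occupies an LSQ slot (assumed)

def schedule(instructions):
    """Greedy in-order schedule; 'mem' ops require a free modeled LSQ slot."""
    schedule_out, lsq_free_at, cycle = [], [], 0
    for op in instructions:
        while True:
            lsq_free_at = [t for t in lsq_free_at if t > cycle]  # retire done ops
            if op != "mem" or len(lsq_free_at) < LSQ_CAPACITY:
                break
            cycle += 1  # stall: modeled LSQ is full
        if op == "mem":
            lsq_free_at.append(cycle + MEM_LATENCY)
        schedule_out.append((cycle, op))
        cycle += 1
    return schedule_out

# Six back-to-back memory ops against a 4-entry LSQ: the fifth must stall
# until the first entry retires at cycle 8.
out = schedule(["mem"] * 6)
print(out)  # [(0, 'mem'), (1, 'mem'), (2, 'mem'), (3, 'mem'), (8, 'mem'), (9, 'mem')]
```

A real compiler would use this occupancy model to reorder independent non-memory work into the stall slots rather than idling.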
Schedulers with load-store queue awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.
2017-01-24
In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
Dual Purkinje-Image Eyetracker
1996-01-01
Abnormal nystagmus can also be detected through the use of an eyetracker [4]. Through tracking points of eye gaze within a scene, it is possible to... moving, even when gazing. Correcting for these unpredictable micro eye movements would allow corrective procedures in eye surgery to become more accurate... victim with a screen of letters on a monitor. A calibrated eyetracker then provides a processor with information about the location of eye gaze. The
2010-07-01
imagery, persistent sensor array I. Introduction New device fabrication technologies and heterogeneous embedded processors have led to the emergence of a... geometric occlusions between target and sensor, motion blur, urban scene complexity, and high data volumes. In practical terms the targets are small... distributed airborne narrow-field-of-view video sensor networks. Airborne camera arrays combined with computational photography techniques enable the
National Radar Conference, Los Angeles, CA, March 12, 13, 1986, Proceedings
NASA Astrophysics Data System (ADS)
The topics discussed include radar systems, radar subsystems, and radar signal processing. Papers are presented on millimeter wave radar for proximity fuzing of smart munitions, a solid state low pulse power ground surveillance radar, and the Radarsat prototype synthetic-aperture radar signal processor. Consideration is also given to automatic track quality assessment in ADT radar systems and to instrumentation of RCS measurements of modulation spectra of aircraft blades.
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major beneficiaries of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f,P) = 1/((1-f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
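Amdahl's Law speedup, S(f,P) = 1/((1-f) + f/P) with f the parallelizable fraction and P the processor count, is easy to evaluate numerically; the example values below are illustrative, not measurements from the MCNP study.

```python
# Amdahl's Law: theoretical speedup for parallel fraction f on P processors.

def amdahl_speedup(f, p):
    return 1.0 / ((1.0 - f) + f / p)

# Even with 90% of the work parallelized, 8 processors give well under 8x,
# illustrating why measured speedups trail the processor count.
print(round(amdahl_speedup(0.9, 8), 2))    # 4.71
print(round(amdahl_speedup(0.9, 1e9), 2))  # 10.0  (asymptotic limit 1/(1-f))
```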
Vehicle-borne IED detection using the ULTOR correlation processor
NASA Astrophysics Data System (ADS)
Burcham, Joel D.; Vachon, Joyce E.
2006-05-01
Advanced Optical Systems, Inc. developed the ULTOR® system, a real-time correlation processor that looks for improvised explosive devices (IEDs) by examining imagery of vehicles. The system determines the level of threat an approaching vehicle may represent. The system works on incoming video collected at different wavelengths, including visible, infrared, and synthetic aperture radar. Sensors that attach to ULTOR can be located wherever necessary to improve the safety around a checkpoint. When a suspect vehicle is detected, ULTOR can track the vehicle, alert personnel, check for previous instances of the vehicle, and update other networked systems with the threat information. The ULTOR processing engine focuses on the spatial frequency information available in the image. It correlates the imagery with templates that specify the criteria defining a suspect vehicle. It can perform full-field correlations at a rate of 180 Hz or better. Additionally, the spatial frequency information is applied to a trained neural network to identify suspect vehicles. We have performed various laboratory and field experiments to verify the performance of the ULTOR system in a counter-IED environment. The experiments range from tracking specific targets in video clips to demonstrating real-time ULTOR system performance. The selected targets in the experiments include various automobiles in both visible and infrared video.
Reconfigurable lattice mesh designs for programmable photonic processors.
Pérez, Daniel; Gasulla, Ivana; Capmany, José; Soref, Richard A
2016-05-30
We propose and analyse two novel mesh design geometries for the implementation of tunable optical cores in programmable photonic processors. These geometries are the hexagonal and the triangular lattice. They are compared here to a previously proposed square mesh topology in terms of a series of figures of merit that account for metrics relevant to on-chip integration of the mesh. We find that the hexagonal mesh is the most suitable option of the three considered for the implementation of the reconfigurable optical core in the programmable processor.
Range Measurement as Practiced in the Deep Space Network
NASA Technical Reports Server (NTRS)
Berner, Jeff B.; Bryant, Scott H.; Kinman, Peter W.
2007-01-01
Range measurements are used to improve the trajectory models of spacecraft tracked by the Deep Space Network. The unique challenge of deep-space ranging is that the two-way delay is long, typically many minutes, and the signal-to-noise ratio is small. Accurate measurements are made under these circumstances by means of long correlations that incorporate Doppler rate-aiding. This processing is done with commercial digital signal processors, providing a flexibility in signal design that can accommodate both the traditional sequential ranging signal and pseudonoise range codes. Accurate range determination requires the calibration of the delay within the tracking station. Measurements with a standard deviation of 1 m have been made.
Science & Technology Review November 2003
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMahon, D
2003-11-01
This issue of Science & Technology Review covers the following topics: (1) We Will Always Need Basic Science--Commentary by Tomas Diaz de la Rubia; (2) When Semiconductors Go Nano--experiments and computer simulations reveal some surprising behavior of semiconductors at the nanoscale; (3) Retinal Prosthesis Provides Hope for Restoring Sight--A microelectrode array is being developed for a retinal prosthesis; (4) Maglev on the Development Track for Urban Transportation--Inductrack, a Livermore concept to levitate train cars using permanent magnets, will be demonstrated on a 120-meter-long test track; and (5) Power Plant on a Chip Moves Closer to Reality--Laboratory-designed fuel processor gives a power boost to a dime-size fuel cell.
Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed
NASA Technical Reports Server (NTRS)
Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles
2016-01-01
The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths, autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human robotic interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons, namely the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. 
Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as autonomously as possible. The most important progress in this area has been the work towards efficient path planning for high DOF, highly constrained systems. Other advances include machine vision algorithms for localizing and automatically docking with handrails, the ability of the operator to place obstacles in the robot's virtual environment, autonomous obstacle avoidance techniques, and constraint management.
Small-strip Thin Gap Chambers for the muon spectrometer upgrade of the ATLAS experiment
NASA Astrophysics Data System (ADS)
Perez Codina, E.; ATLAS Muon Collaboration
2016-07-01
The ATLAS muon system upgrade to be installed during the LHC long shutdown in 2018/19, the so-called New Small Wheel (NSW), is designed to cope with the increased instantaneous luminosity in LHC Run 3. The small-strip Thin Gap Chambers (sTGC) will provide the NSW with a fast trigger and high precision tracking. The construction protocol has been validated by test beam experiments on a full-size prototype sTGC detector, showing the performance requirements are met. The intrinsic spatial resolution for a single layer has been found to be about 45 μm for a perpendicular incident angle, and the transition region between pads has been measured to be about 4 mm.
A simulation framework for the CMS Track Trigger electronics
NASA Astrophysics Data System (ADS)
Amstutz, C.; Magazzù, G.; Weber, M.; Palla, F.
2015-03-01
A simulation framework has been developed to test and characterize algorithms, architectures and hardware implementations of the vastly complex CMS Track Trigger for the high luminosity upgrade of the CMS experiment at the Large Hadron Collider in Geneva. High-level SystemC models of all system components have been developed to simulate a portion of the track trigger. The simulation of the system components together with input data from physics simulations allows evaluating figures of merit, like delays or bandwidths, under realistic conditions. The use of SystemC for high-level modelling allows co-simulation with models developed in Hardware Description Languages, e.g. VHDL or Verilog. Therefore, the simulation framework can also be used as a test bench for digital modules developed for the final system.
Investigation of TM Band-to-band Registration Using the JSC Registration Processor
NASA Technical Reports Server (NTRS)
Yao, S. S.; Amis, M. L.
1984-01-01
The JSC registration processor performs scene-to-scene (or band-to-band) correlation based on edge images. The edge images are derived from a percentage of the edge pixels calculated from the raw scene data, excluding clouds and other extraneous data in the scene. Correlations are performed on patches (blocks) of the edge images, and the correlation peak location in each patch is estimated iteratively to fractional-pixel accuracy. Peak offset locations from all patches over the scene are then considered together, and a variety of tests are made to weed out outliers and other inconsistencies before a distortion model is assumed. Thus, the correlation peak offset locations in each patch indicate quantitatively how well the two TM bands register to each other over that patch of scene data. The average of these offsets indicates the overall accuracy of the band-to-band registration. The registration processor was also used to register one acquisition to another acquisition of multitemporal TM data acquired over the same ground track. Band 4 images from both acquisitions were correlated, and an rms error of a fraction of a pixel was routinely obtained.
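The patch-correlation step described above can be sketched in a few lines: cross-correlate two patches via the FFT, take the integer peak, then refine it to fractional-pixel accuracy with a 3-point parabolic fit along each axis. This is an illustrative stand-in, not the actual JSC processor code, and `patch_offset` is a hypothetical helper name.

```python
import numpy as np

def patch_offset(ref, img):
    """Estimate the (row, col) shift of img relative to ref by circular
    FFT cross-correlation, refining the integer peak to fractional-pixel
    accuracy with a 3-point parabolic fit along each axis."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    corr = np.fft.fftshift(corr)               # put zero shift at the centre
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    offset = []
    for axis, p in enumerate(peak):
        prev_ = np.roll(corr, 1, axis)[peak]   # neighbour at p - 1
        next_ = np.roll(corr, -1, axis)[peak]  # neighbour at p + 1
        # Vertex of the parabola through the peak and its two neighbours.
        denom = prev_ - 2.0 * corr[peak] + next_
        frac = 0.5 * (prev_ - next_) / denom if denom != 0 else 0.0
        offset.append(p - corr.shape[axis] // 2 + frac)
    return tuple(offset)

# A synthetic edge image shifted by (2, -3) pixels yields roughly that offset.
rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
print(patch_offset(ref, np.roll(ref, (2, -3), axis=(0, 1))))
```

In the real processor, offsets like these from every patch would then feed the outlier tests and distortion model described above.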
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, M; Penjweini, R; Zhu, T
Purpose: Photodynamic therapy (PDT) is used in conjunction with surgical debulking of tumorous tissue during treatment for pleural mesothelioma. One of the key components of effective PDT is uniform light distribution. Currently, light is monitored with 8 isotropic light detectors that are placed at specific locations inside the pleural cavity. A tracking system with real-time feedback software can be utilized to improve the uniformity of light in addition to the existing detectors. Methods: An infrared (IR) tracking camera is used to monitor the movement of the light source. The same system determines the pleural geometry of the treatment area. Software upgrades allow visualization of the pleural cavity as a two-dimensional volume. The treatment delivery wand was upgraded for ease of light delivery while incorporating the IR system. Isotropic detector locations are also displayed. Data from the tracking system are used to calculate the light fluence rate delivered. These data are also compared with in vivo data collected via the isotropic detectors. Furthermore, treatment volume information will be used to form light dose volume histograms of the pleural cavity. Results: In a phantom study, the light distribution was improved by using real-time guidance compared to the distribution when using detectors without guidance. With the tracking system, 2D light fluence data can be collected over the entire cavity rather than at just the 8 discrete detector locations, and the light fluence distribution can be calculated at every time point in the treatment. Conclusion: The IR camera has been used successfully during pleural PDT patient treatment to track the motion of the light source and provide a real-time display of 2D light fluence. It is possible to use the feedback system to deliver a more uniform dose of light throughout the pleural cavity.
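As a rough illustration of how tracked source positions translate into delivered light, the sketch below applies the inverse-square law for the direct component only. The function name and units are assumptions for this example; real pleural PDT dosimetry must also account for light scattered within the cavity, which this deliberately ignores.

```python
import math

def direct_fluence_rate(power_w, src, det):
    """Direct-light fluence rate (W/cm^2) at detector position `det`
    from an isotropic point source emitting `power_w` watts at `src`
    (positions in cm); scattering inside the cavity is ignored."""
    r2 = sum((a - b) ** 2 for a, b in zip(src, det))
    return power_w / (4.0 * math.pi * r2)

# Integrating this rate over the tracked source path gives the
# cumulative fluence (J/cm^2) at each detector location.
print(direct_fluence_rate(1.0, (0.0, 0.0, 0.0), (2.0, 0.0, 0.0)))
```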
A high-rate PCI-based telemetry processor system
NASA Astrophysics Data System (ADS)
Turri, R.
2002-07-01
The high performance now reached by satellite on-board telemetry generation and transmission imposes the design of ground facilities with higher processing capabilities at low cost, to allow wide adoption of these ground stations. The equipment normally used is based on complex, proprietary bus and computing architectures that prevent the systems from exploiting the continuous and rapid increase in computing power available on the market. PCI bus systems now allow processing of high-rate data streams in a standard PC system. At the same time, the Windows NT operating system supports multitasking and symmetric multiprocessing, giving the capability to process high-data-rate signals. In addition, high-speed networking, 64-bit PCI-bus technology, and the increase in processor power and software capability allow creating a system based on COTS products, which in the future may be easily and inexpensively upgraded. In the frame of the EUCLID RTP 9.8 project, a specific work element was dedicated to developing the architecture of a system able to acquire telemetry data at up to 600 Mbps. Laben S.p.A., a Finmeccanica company entrusted with this work, has designed a PCI-based telemetry system making possible the communication between a satellite down-link and a wide area network at the required rate.
Application of parallelized software architecture to an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam
2011-01-01
This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities, including sensor interfacing, data processing, path planning, navigation algorithms, and motor control. This inefficient approach led to poor software performance and made the software difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks - motor control, navigation, sensor data collection, etc. - into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used the previous year, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.
Metsahovi Radio Observatory - IVS Network Station
NASA Technical Reports Server (NTRS)
Uunila, Minttu; Zubko, Nataliya; Poutanen, Markku; Kallunki, Juha; Kallio, Ulla
2013-01-01
In 2012, Metsahovi Radio Observatory together with Finnish Geodetic Institute officially became an IVS Network Station. Eight IVS sessions were observed during the year. Two spacecraft tracking and one EVN X-band experiment were also performed. In 2012, the Metsahovi VLBI equipment was upgraded with a Digital Base Band Converter, a Mark 5B+, a FILA10G, and a FlexBuff.
T-Violation experiment using polarized Li-8 at TRIUMF
NASA Astrophysics Data System (ADS)
Murata, Jiro; MTV Collaboration
2014-09-01
The MTV experiment, searching for T-violating electron transverse polarization in polarized nuclear beta decay at TRIUMF, is running. The main electron tracking detector, used as a Mott polarimeter, was upgraded from a planar drift chamber to a cylindrical drift chamber (CDC), which has been commissioned and tested. In this talk, the preparation status of the next physics production run using the CDC will be presented.
Trigger drift chamber for the upgraded mark II detector at PEP
NASA Astrophysics Data System (ADS)
Ford, W. T.; Smith, J. G.; Wagner, S. R.; Weber, P.; White, S. L.; Alvarez, M.; Calviño, F.; Fernandez, E.
1987-04-01
A small cylindrical track detector was built as an array of single-wire drift cells with aluminized mylar cathode tubes. Point measurement resolution of ~90 μm was achieved with a drift gas of 50% argon-50% ethane at atmospheric pressure. The chamber construction, electronics, and calibration are discussed. Performance results from PEP colliding-beam data are presented.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Surrogate: A Body-Dexterous Mobile Manipulation Robot with a Tracked Base
NASA Technical Reports Server (NTRS)
Hebert, Paul (Inventor); Borders, James W. (Inventor); Hudson, Nicolas H. (Inventor); Kennedy, Brett A. (Inventor); Ma, Jeremy C. (Inventor); Bergh, Charles F. (Inventor)
2018-01-01
Robotics platforms in accordance with various embodiments of the invention can be utilized to implement highly dexterous robots capable of whole body motion. Robotics platforms in accordance with one embodiment of the invention include: a processor; a memory containing a whole body motion application; a spine, where the spine has seven degrees of freedom and comprises a spine actuator and three spine elbow joints that each include two spine joint actuators; at least one limb, where the at least one limb comprises a limb actuator and three limb elbow joints that each include two limb joint actuators; a tracked base; a connecting structure that connects the at least one limb to the spine; a second connecting structure that connects the spine to the tracked base; wherein the processor is configured by the whole body motion application to move the at least one limb and the spine to perform whole body motion.
Adaptive DIT-Based Fringe Tracking and Prediction at IOTA
NASA Technical Reports Server (NTRS)
Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.
2004-01-01
An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding-window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on a 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
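A sliding-window DFT of the kind described can be updated recursively at O(1) cost per sample, which is what makes millisecond-scale real-time operation feasible. The sketch below is illustrative only, assuming a single tracked bin, and is not the IOTA fringe-tracker code.

```python
import numpy as np

def sliding_dft(x, window, k):
    """Track the k-th DFT bin of a length-`window` sliding window using
    the recursive update  S <- (S + x[n] - x[n - window]) * w  with
    w = exp(2j*pi*k/window), which reproduces the direct windowed DFT
    exactly for integer bin index k."""
    w = np.exp(2j * np.pi * k / window)
    s, bins = 0.0 + 0.0j, []
    for n, sample in enumerate(x):
        old = x[n - window] if n >= window else 0.0
        s = (s + sample - old) * w
        if n >= window - 1:            # first full window is complete
            bins.append(s)
    return np.array(bins)

# Agreement with a brute-force DFT at every window position:
x = np.cos(2 * np.pi * 0.25 * np.arange(64))   # synthetic fringe record
S = sliding_dft(x, 8, 2)                        # bin 2 of an 8-point DFT
direct = np.array([np.fft.fft(x[n:n + 8])[2] for n in range(57)])
print(np.allclose(S, direct))
```

Tracking only the bins near the expected fringe frequency, rather than recomputing a full FFT per sample, is the kind of optimization the abstract's "optimized for computational efficiency" suggests.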
Compendium of Instrumentation Whitepapers on Frontier Physics Needs for Snowmass 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipton, R.
2013-01-01
Contents of this collection of whitepapers include: Operation of Collider Experiments at High Luminosity; Level 1 Track Triggers at HL-LHC; Tracking and Vertex Detectors for a Muon Collider; Triggers for hadron colliders at the energy frontier; ATLAS Upgrade Instrumentation; Instrumentation for the Energy Frontier; Particle Flow Calorimetry for CMS; Noble Liquid Calorimeters; Hadronic dual-readout calorimetry for high energy colliders; Another Detector for the International Linear Collider; e+e- Linear Colliders Detector Requirements and Limitations; Electromagnetic Calorimetry in Project X Experiments; The Project X Physics Study; Intensity Frontier Instrumentation; Project X Physics Study Calorimetry Report; Project X Physics Study Tracking Report; The LHCb Upgrade; Neutrino Detectors Working Group Summary; Advanced Water Cherenkov R&D for WATCHMAN; Liquid Argon Time Projection Chamber (LArTPC); Liquid Scintillator Instrumentation for Physics Frontiers; A readout architecture for 100,000 pixel Microwave Kinetic Inductance Detector array; Instrumentation for New Measurements of the Cosmic Microwave Background polarization; Future Atmospheric and Water Cherenkov γ-ray Detectors; Dark Energy; Can Columnar Recombination Provide Directional Sensitivity in WIMP Search?; Instrumentation Needs for Detection of Ultra-high Energy Neutrinos; Low Background Materials for Direct Detection of Dark Matter; Physics Motivation for WIMP Dark Matter Directional Detection; Solid Xenon R&D at Fermilab; Ultra High Energy Neutrinos; Instrumentation Frontier: Direct Detection of WIMPs; nEXO detector R&D; Large Arrays of Air Cherenkov Detectors; and Applications of Laser Interferometry in Fundamental Physics Experiments.
The Fermilab Accelerator control system
NASA Astrophysics Data System (ADS)
Bogert, Dixon
1986-06-01
With the advent of the Tevatron, considerable upgrades have been made to the controls of all the Fermilab Accelerators. The current system is based on making as large an amount of data as possible available to many operators or end-users. Specifically there are about 100 000 separate readings, settings, and status and control registers in the various machines, all of which can be accessed by seventeen consoles, some in the Main Control Room and others distributed throughout the complex. A "Host" computer network of approximately eighteen PDP-11/34's, seven PDP-11/44's, and three VAX-11/785's supports a distributed data acquisition system including Lockheed MAC-16's left from the original Main Ring and Booster instrumentation and upwards of 1000 Z80, Z8002, and M68000 microprocessors in dozens of configurations. Interaction of the various parts of the system is via a central data base stored on the disk of one of the VAXes. The primary computer-hardware communication is via CAMAC for the new Tevatron and Antiproton Source; certain subsystems, among them vacuum, refrigeration, and quench protection, reside in the distributed microprocessors and communicate via GAS, an in-house protocol. An important hardware feature is an accurate clock system making a large number of encoded "events" in the accelerator supercycle available for both hardware modules and computers. System software features include the ability to save the current state of the machine or any subsystem and later restore it or compare it with the state at another time, a general logging facility to keep track of specific variables over long periods of time, detection of "exception conditions" and the posting of alarms, and a central filesharing capability in which files on VAX disks are available for access by any of the "Host" processors.
Simple debugging techniques for embedded subsystems
NASA Astrophysics Data System (ADS)
MacPherson, Matthew S.; Martin, Kevin S.
1990-08-01
This paper describes some of the tools and methods used for developing and debugging embedded subsystems at Fermilab. Specifically, these tools have been used for the Flying Wire project and are currently being employed for the New TECAR upgrade. The Flying Wire is a subsystem that swings a wire through the beam in order to measure luminosity and beam density distribution, and TECAR (Tevatron excitation controller and regulator) controls the power-supply ramp generation for the superconducting Tevatron accelerator at Fermilab. In both instances the subsystem hardware consists of a VME crate with one or more processors, shared memory and a network connection to the accelerator control system. Two real-time operating systems are currently being used: VRTX for the Flying Wire system, and MTOS for New TECAR. The code which runs in these subsystems is a combination of C and assembler and is developed using the Microtec cross-development tools on a VAX 8650 running VMS. This paper explains how multiple debuggers are used to give the greatest possible flexibility from assembly to high-level debugging. Also discussed is how network debugging and network downloading can make a very effective and efficient means of finding bugs in the subsystem environment. The debuggers used are PROBE1, TRACER and the MTOS debugger.
An Empirical Method for Determining the Lunar Gravity Field. Ph.D. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Ferrari, A. J.
1971-01-01
A method has been devised to determine the spherical harmonic coefficients of the lunar gravity field. This method consists of a two-step data reduction and estimation process. In the first step, a weighted least-squares empirical orbit determination scheme is applied to Doppler tracking data from lunar orbits to estimate long-period Kepler elements and rates. Each of the Kepler elements is represented by an independent function of time. The long-period perturbing effects of the earth, sun, and solar radiation are explicitly modeled in this scheme. Kepler element variations estimated by this empirical processor are ascribed to the non-central lunar gravitation features. Doppler data are reduced in this manner for as many orbits as are available. In the second step, the Kepler element rates are used as input to a second least-squares processor that estimates lunar gravity coefficients using the long-period Lagrange perturbation equations.
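The first reduction step, fitting each long-period Kepler element to an independent function of time with a weighted least-squares estimator, can be illustrated with a toy linear-drift fit. The helper below is hypothetical and stands in for the much richer empirical orbit-determination scheme described above.

```python
import numpy as np

def wls_element_rate(t, y, sigma):
    """Weighted least-squares fit of a linear model y(t) = a + b*t,
    returning the estimated rate b; `sigma` holds per-point measurement
    standard deviations, so rows are weighted by 1/sigma."""
    A = np.column_stack([np.ones_like(t), t])
    w = 1.0 / sigma                          # square root of the weights
    coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coef[1]

# A noiseless element drifting at 0.5 units/day is recovered exactly.
t = np.linspace(0.0, 10.0, 50)
rate = wls_element_rate(t, 3.0 + 0.5 * t, np.full(50, 0.1))
print(round(rate, 6))
```

In the second step of the method, rates estimated this way for many orbits would feed the Lagrange perturbation equations to solve for the gravity coefficients.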
The TOPSAR interferometric radar topographic mapping instrument
NASA Technical Reports Server (NTRS)
Zebker, Howard A.; Madsen, Soren N.; Martin, Jan; Alberti, Giovanni; Vetrella, Sergio; Cucci, Alessandro
1992-01-01
The NASA DC-8 AIRSAR instrument was augmented with a pair of C-band antennas displaced across track to form an interferometer sensitive to topographic variations of the Earth's surface. The antennas were developed by the Italian consortium Co.Ri.S.T.A., under contract to the Italian Space Agency (ASI), while the AIRSAR instrument and modifications to it supporting TOPSAR were sponsored by NASA. A new data processor was developed at JPL for producing the topographic maps, and a second processor was developed at Co.Ri.S.T.A. All the results presented below were processed at JPL. During the 1991 DC-8 flight campaign, data were acquired over several sites in the United States and Europe, and topographic maps were produced from several of these flight lines. Analysis of the results indicate that statistical errors are in the 2-3 m range for flat terrain and in the 4-5 m range for mountainous areas.
Knowledge-based vision for space station object motion detection, recognition, and tracking
NASA Technical Reports Server (NTRS)
Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III
1987-01-01
Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.
Re-Engineering of the Hubble Space Telescope (HST) to Reduce Operational Costs
NASA Technical Reports Server (NTRS)
Garvis, Michael; Dougherty, Andrew; Whittier, Wallace
1996-01-01
Satellite telemetry processing onboard the Hubble Space Telescope (HST) is carried out using dedicated software and hardware. The current ground system is expensive to operate and maintain. The mandate to reduce satellite ground system operations and maintenance costs by the year 2000 led NASA to upgrade the command and control systems in order to improve data processing capabilities, reduce required operator experience levels, and increase system standardization. As a result, a command and control system product development team was formed to redesign and develop the HST ground system. The command and control system development consists of six elements. The results of the prototyping phase carried out for three of these elements are presented: the front-end processor, the middleware, and the graphical user interface.
Spacesuit Data Display and Management System
NASA Technical Reports Server (NTRS)
Hall, David G.; Sells, Aaron; Shah, Hemal
2009-01-01
A prototype embedded avionics system has been designed for the next generation of NASA extra-vehicular-activity (EVA) spacesuits. The system performs biomedical and other sensor monitoring, image capture, data display, and data transmission. An existing NASA Phase I and II award winning design for an embedded computing system (ZIN vMetrics - BioWATCH) has been modified. The unit has a reliable, compact form factor with flexible packaging options. These innovations are significant, because current state-of-the-art EVA spacesuits do not provide capability for data displays or embedded data acquisition and management. The Phase 1 effort achieved Technology Readiness Level 4 (high fidelity breadboard demonstration). The breadboard uses a commercial-grade field-programmable gate array (FPGA) with embedded processor core that can be upgraded to a space-rated device for future revisions.
The Modernization of a Long-Focal Length Fringe-Type Laser Velocimeter
NASA Technical Reports Server (NTRS)
Meyers, James F.; Lee, Joseph W.; Cavone, Angelo A.; Fletcher, Mark T.
2012-01-01
A long-focal-length laser velocimeter constructed in the early 1980s was upgraded using current technology to improve usability, reliability, and future serviceability. The original free-space optics were replaced with a state-of-the-art fiber-optic subsystem, which allowed most of the optics, including the laser, to be remote from the harsh tunnel environment. General-purpose high-speed digitizers incorporated in a standard modular data acquisition system, along with custom signal-processing software executed on a desktop computer, served as the replacement for the original signal processors. The resulting system has increased optical sensitivity, with real-time signal/data processing that produces measurement precisions exceeding those of the original system. Monte Carlo simulations, along with laboratory and wind tunnel investigations, were used to determine system characteristics and measurement precision.
NASA Astrophysics Data System (ADS)
Kirov, Boian; Batchvarov, Ditchko; Krasteva, Rumiana; Boneva, Ani; Nedkov, Rumen; Klimov, Stanislav; Stainov, Gencho
The advance of new wireless communications provides additional opportunities for spaceborne experiments. It is now possible to have one basic instrument collecting information from several sensors without burdensome harnessing among them. Besides, the wireless connection among various elements inside the instrument allows hardware upgrades to be realized without changing the whole instrument globally. In complex experiments consisting of several instruments, the possibility is provided for continuous communication among the instruments and for optimal choice of the appropriate mode of operation by the central processor. In the present paper, the LP instrument (electrostatic Langmuir probe) is described - an element of the "Obstanovka" experiment designed to operate aboard the International Space Station - with emphasis on the use of wireless communication between the sensors and the main instrument.
Energetic Particle Loss Estimates in W7-X
NASA Astrophysics Data System (ADS)
Lazerson, Samuel; Akaslompolo, Simppa; Drevlak, Micheal; Wolf, Robert; Darrow, Douglass; Gates, David; W7-X Team
2017-10-01
The collisionless loss of high-energy H+ and D+ ions in the W7-X device is examined using the BEAMS3D code. Simulations of collisionless losses are performed for a large ensemble of particles distributed over various flux surfaces. A clear loss cone is present in the distribution for all particles. These simulations are compared against slowing-down simulations in which electron impact, ion impact, and pitch-angle scattering are considered. Full-device simulations allow tracing of particle trajectories to the first-wall components and provide estimates for the placement of a novel set of energetic particle detectors. Recent performance upgrades to the code allow runs on more than 1000 processors, providing high-fidelity simulations. Speedup and future work are discussed. DE-AC02-09CH11466.
Upgrade project and plans for the ATLAS detector and trigger
NASA Astrophysics Data System (ADS)
Pastore, Francesca; Atlas Collaboration
2013-08-01
The LHC is expected to undergo upgrades over the coming years in order to extend its scientific potential. Through two phases (Phase-I and Phase-II), the average luminosity will be increased by a factor of 5-10 above the design luminosity of 10^34 cm^-2 s^-1. Consequently, the LHC experiments will need upgraded detectors and new trigger and DAQ infrastructure to cope with the increased radiation levels and particle rates foreseen at such high luminosity. In this paper we describe the planned changes and the investigations for the ATLAS experiment, focusing on the requirements for the trigger system to handle the increased rate of collisions per beam crossing while maintaining widely inclusive selections. In successive steps, the trigger detectors will improve their selectivity by benefiting from increased granularity. To improve the flexibility of the system, the use of tracking information in the lower levels of the trigger selection is also discussed. Lastly, different scenarios are compared, based on the expected physics potential of ATLAS in this high-luminosity regime.
Timing and tracking for the Crystal Barrel detector
NASA Astrophysics Data System (ADS)
Beck, Reinhard; Brinkmann, Kai; Novotny, Rainer
2017-01-01
The aim of project D.3 is the upgrade of several detector components used in the CBELSA/TAPS experiment at ELSA. The readout of the Crystal Barrel Calorimeter will be extended by a timing branch in order to gain trigger capability for the detector, which will allow completely neutral final states in photoproduction reactions to be measured (see projects A.1 and C.5). Additionally, the readout of the inner crystals of the TAPS detector, which covers the forward opening of the Crystal Barrel Calorimeter, will be modified to cope with the high event rates resulting from the intensity upgrade of ELSA. Furthermore, a full-scale prototype Time Projection Chamber (TPC) has been built to be used as a new central tracker for the CBELSA/TAPS experiment at ELSA and the FOPI experiment at GSI.
SIMULATIONS OF BOOSTER INJECTION EFFICIENCY FOR THE APS-UPGRADE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvey, J.; Borland, M.; Harkay, K.
2017-06-25
The APS-Upgrade will require the injector chain to provide high single-bunch charge for swap-out injection. One possible limiting factor to achieving this is an observed reduction of injection efficiency into the booster synchrotron at high charge. We have simulated booster injection using the particle tracking code elegant, including a model for the booster impedance and beam loading in the RF cavities. The simulations point to two possible causes for reduced efficiency: energy oscillations leading to losses at high-dispersion locations, and a vertical beam-size blowup caused by ions in the Particle Accumulator Ring. We also show that the efficiency is much higher in an alternate booster lattice with a smaller vertical beta function and zero dispersion in the straight sections.
The cylindrical GEM detector of the KLOE-2 experiment
NASA Astrophysics Data System (ADS)
Bencivenni, G.; Branchini, P.; Ciambrone, P.; Czerwinski, E.; De Lucia, E.; Di Cicco, A.; Domenici, D.; Felici, G.; Fermani, P.; Morello, G.
2017-07-01
The KLOE-2 experiment started its data-taking campaign in November 2014 with an upgraded tracking system at the DAΦNE electron-positron collider at the Frascati National Laboratory of INFN. The new tracking device, the Inner Tracker, operated together with the KLOE-2 Drift Chamber, has been installed to improve the track and vertex reconstruction capabilities of the experimental apparatus. The Inner Tracker is a cylindrical GEM detector composed of four cylindrical triple-GEM detectors, each equipped with an X-V strip-pad stereo readout. Although GEM detectors are already used in high-energy physics experiments, this device is considered a frontier detector due to its fully cylindrical geometry: KLOE-2 is the first experiment benefiting from this novel detector technology. The alignment and calibration of this detector will be presented together with its operating performance and reconstruction capabilities.
Overview of SCIAMACHY validation: 2002-2004
NASA Astrophysics Data System (ADS)
Piters, A. J. M.; Bramstedt, K.; Lambert, J.-C.; Kirchhoff, B.
2005-08-01
SCIAMACHY, on board Envisat, has now been in operation for almost three years. This UV/visible/NIR spectrometer measures the solar irradiance, the earthshine radiance scattered at nadir and from the limb, and the attenuation of solar radiation by the atmosphere during sunrise and sunset, from 240 to 2380 nm at moderate spectral resolution. Vertical columns and profiles of a variety of atmospheric constituents are inferred from the SCIAMACHY radiometric measurements by dedicated retrieval algorithms. With the support of ESA and several international partners, a methodical SCIAMACHY validation programme has been developed jointly by Germany, the Netherlands and Belgium (the three instrument-providing countries) to meet complex requirements in terms of measured species, altitude range, spatial and temporal scales, geophysical states and intended scientific applications. This summary paper describes the approach adopted to address those requirements. The actual validation of the operational SCIAMACHY processors established at DLR on behalf of ESA has been hampered by data distribution and processor problems. Since the first data releases in summer 2002, the operational processors have been upgraded regularly, and some data products - level-1b spectra and level-2 O3, NO2, BrO and cloud data - have improved significantly. The validation results summarised in this paper conclude that, for limited periods and geographical domains, these products can already be used for atmospheric research. Nevertheless, remaining processor problems cause major errors that prevent scientific use in other periods and domains. Untied to the constraints of operational processing, seven scientific institutes (BIRA-IASB, IFE, IUP-Heidelberg, KNMI, MPI, SAO and SRON) have developed their own retrieval algorithms and generated SCIAMACHY data products, together addressing nearly all targeted constituents. Most of the UV-visible data products (both columns and profiles) already have acceptable, if not excellent, quality.
Several near-infrared column products are still in development, but they have already demonstrated their potential for a variety of applications. In any case, scientific users are advised to read the validation reports carefully before using the data. SCIAMACHY validation is required, and anticipated, to continue throughout the instrument lifetime and beyond; the actual amount of work will obviously depend on funding considerations.
Resistive-strips micromegas detectors with two-dimensional readout
NASA Astrophysics Data System (ADS)
Byszewski, M.; Wotschack, J.
2012-02-01
Micromegas detectors show very good performance for charged-particle tracking in high-rate environments such as at the LHC. It is shown that two coordinates can be extracted from a single gas gap in these detectors. Several micromegas chambers with spark protection by resistive strips and two-dimensional readout have been tested in the context of the R&D work for the ATLAS Muon System upgrade.
FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015) and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim (Versions 9.60.300, 10.5, 11.1 and 12.0) was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 12.0 and addressing DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim Versions 11.1/12.0. Conversion of the rest of the TSPA models was also attempted, but program and operational difficulties precluded this. Upgrade of the remaining modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA-type analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D.A. Gates; J.R. Ferron; M. Bell
In 2003, the NSTX plasma control system was used for plasma shape control using real-time equilibrium reconstruction (using the rtEFIT code - J. Ferron, et al., Nucl. Fusion 38 1055 (1998)). rtEFIT is now in routine use for plasma boundary control [D. A. Gates, et al., submitted to Nuclear Fusion (2005)]. More recently, the system has been upgraded to support feedback control of the resistive wall mode (RWM). This paper describes the hardware and software improvements that were made in support of these physics requirements. The real-time data acquisition system now acquires 352 channels of data at 5 kHz for each NSTX plasma discharge. The latency of the data acquisition, which uses the FPDP (Front Panel Data Port) protocol, is measured to be approximately 8 microseconds. A Stand-Alone Digitizer (SAD), designed at PPPL, along with an FPDP Input Multiplexing Module (FIMM), allows for simple modular upgrades. An interface module was built to connect the FPDP output of the NSTX control system to the legacy Power Conversion link (PCLINK) used for communicating with the PPPL power supplies (first used for TFTR). Additionally, a module has been built for communicating with the switching power amplifiers (SPA) recently installed on NSTX. In addition to the hardware developments, the control software [D. Mastrovito, Fusion Eng. and Design 71 65 (2004)] on the NSTX control system has been upgraded. The control computer is an eight-processor (8x333 MHz G4) machine built by Sky Computers (Chelmsford, MA). The device driver software for the hardware described above is discussed, as well as the new control algorithms that have been developed to control the switching power supplies for RWM control. An important initial task in RWM feedback is to develop a reliable mode detection algorithm.
NASA Astrophysics Data System (ADS)
Wedeking, Gregory A.; Zierer, Joseph J.; Jackson, John R.
2010-07-01
The University of Texas Center for Electromechanics (UT-CEM) is making a major upgrade to the robotic tracking system on the Hobby-Eberly Telescope (HET) as part of the Wide Field Upgrade (WFU). The upgrade centers on a seven-fold increase in payload and necessitated a complete redesign of all tracker supporting structure and motion control systems, including the tracker bridge, ten drive systems, carriage frames, a hexapod, and many other subsystems. The cost and sensitivity of the scientific payload, coupled with the tracker system mass increase, necessitated major upgrades to personnel and hardware safety systems. To optimize the kinematic design of the entire tracker, UT-CEM developed novel uses of constraints and drivers to interface with a commercially available CAD package (SolidWorks). For example, to optimize volume usage and minimize obscuration, the CAD software was exercised to accurately determine the tracker/hexapod operational space needed to meet science requirements. To verify hexapod controller models, actuator travel requirements were graphically measured and compared to well-defined equations of motion for Stewart platforms. To ensure the safety of critical hardware during various failure modes, UT-CEM engineers developed Visual Basic drivers to interface with the CAD software and quickly tabulate distance measurements between critical pieces of optical hardware and adjacent components for thousands of possible hexapod configurations. These advances and techniques, applicable to any challenging robotic system design, are documented and describe new ways to use commercially available software tools to more clearly define hardware requirements and help ensure safe operation.
2010-04-01
for predicting central blood volume changes to focus on the development of software algorithms and systems to provide a capability to track, and...which creatively fills this Critical Care gap. Technology in this sense means hardware and software systems which incorporate sensors, processors...devices for use in forward surgical and combat areas. Mil Med 170: 76-82, 2005. [10] Gaylord KM, Cooper DB, Mercado JM, Kennedy JE, Yoder LH, and
NASA Technical Reports Server (NTRS)
1990-01-01
The Multi-Compatible Network Interface Unit (MCNIU) is intended to connect the space station's communications and tracking, guidance and navigation, life support, electric power, payload data, hand controls, display consoles and other systems, and also communicate with diverse processors. Honeywell is now marketing MCNIU commercially. It has applicability in certain military operations or civil control centers. It has nongovernment utility among large companies, universities and research organizations that transfer large amounts of data among workstations and computers. *This product is no longer commercially available.
Radar Data Processing Using a Distributed Computational System
1992-06-01
objects to processors must reduce Toc(N) (i.e., the time to compute on N nodes) [Ref. 28]. Time spent communicating can represent a degradation of...de Sistemas e Computação, s/ data. [9] Vilhena R. "Introdução aos Algoritmos para Processamento de Marcações e Distâncias", Escola Naval - Notas de...Aula - Automação de Sistemas Navais, s/ data. [10] Averbuch A., Itzikowitz S., and Kapon T. "Parallel Implementation of Multiple Model Tracking
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, the theoretical speedup, as a function of the number of processors P and the fraction f of task time that multiprocesses, can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications, this theoretical limit cannot be achieved because of additional terms (e.g., multitasking overhead, memory overlap, etc.) that are not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing because the particle tracks are generally independent, and the precision of the result increases as the square root of the number of particles tracked.
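Amdahl's law as quoted in the abstract is straightforward to evaluate numerically; the sketch below (the function name is illustrative, not part of MCNP) shows how quickly a serial fraction caps the attainable speedup:

```python
def amdahl_speedup(f, p):
    """Theoretical speedup S(f, P) = 1 / (1 - f + f/P) for a task whose
    fraction f of run time multiprocesses on p processors (Amdahl's law)."""
    return 1.0 / ((1.0 - f) + f / p)

# A perfectly parallel task scales linearly with the processor count...
print(amdahl_speedup(1.0, 8))   # 8.0
# ...while a 10% serial fraction already caps an 8-way run well below 8x.
print(amdahl_speedup(0.9, 8))   # ~4.71
```

As the abstract notes, measured speedups fall short even of this limit because of overhead terms the law does not model.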
Adaptive DFT-Based Interferometer Fringe Tracking
NASA Astrophysics Data System (ADS)
Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.
An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier-transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be to the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.
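The core of the sliding-window DFT idea can be sketched in a few lines; the function name, window length, and windowing choice below are illustrative assumptions, not details of the IOTA implementation:

```python
import numpy as np

def fringe_peak_bin(scan, window=64):
    """Locate the dominant (carrier) frequency bin in the most recent
    window of an interferogram scan via a windowed DFT.  A real fringe
    tracker would also use the phase of this bin to estimate and correct
    the optical path difference (OPD)."""
    w = np.asarray(scan[-window:], dtype=float)
    w = w - w.mean()    # remove the DC offset so its leakage cannot mask the carrier
    spectrum = np.fft.rfft(w * np.hanning(window))
    return int(np.argmax(np.abs(spectrum[1:])) + 1)   # skip the (near-zero) DC bin

# Synthetic interferogram: 8 fringe cycles across a 64-sample window.
t = np.arange(64)
scan = 1.0 + 0.5 * np.cos(2 * np.pi * 8 * t / 64)
print(fringe_peak_bin(scan))  # 8
```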
Adaptive track scheduling to optimize concurrency and vectorization in GeantV
Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...
2015-05-22
The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics, requiring tuning of the thresholds that switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. Finally, this work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; initial results of that study are presented here.
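The gather-and-dispatch policy can be illustrated with a deliberately simplified sketch (the names and fixed basket size are hypothetical; GeantV's real scheduler adjusts these dynamically):

```python
from collections import defaultdict

def schedule(tracks, basket_size=4):
    """Gather tracks per geometry volume; dispatch a full basket to the
    vectorized algorithm, and fall back to single-track (scalar) mode for
    leftovers, where vectorization would only add overhead."""
    baskets = defaultdict(list)
    dispatched = []
    for track_id, volume in tracks:
        baskets[volume].append(track_id)
        if len(baskets[volume]) == basket_size:
            dispatched.append(('vector', volume, baskets.pop(volume)))
    for volume, leftover in baskets.items():   # flush partially filled baskets
        for track_id in leftover:
            dispatched.append(('scalar', volume, [track_id]))
    return dispatched

# Ten tracks alternating between two volumes.
tracks = [(i, 'calo' if i % 2 == 0 else 'tracker') for i in range(10)]
print(schedule(tracks)[0])  # ('vector', 'calo', [0, 2, 4, 6])
```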
A real-time tracking system of infrared dim and small target based on FPGA and DSP
NASA Astrophysics Data System (ADS)
Rong, Sheng-hui; Zhou, Hui-xin; Qin, Han-lin; Wang, Bing-jian; Qian, Kun
2014-11-01
A core technology in infrared warning systems is the detection and tracking of dim and small targets against complicated backgrounds. Consequently, running the detection algorithm on a hardware platform has high practical value in the military field. In this paper, a real-time detection and tracking system for infrared dim and small targets, with an FPGA (Field Programmable Gate Array) and a DSP (Digital Signal Processor) as its core, was designed, and the corresponding detection and tracking algorithm and signal flow are elaborated. In the first stage, the FPGA obtains the infrared image sequence from the sensor, suppresses background clutter with a mathematical morphology method, and enhances the target intensity with a Laplacian-of-Gaussian operator. In the second stage, the DSP obtains both the original image and the filtered image from the FPGA via the video port. It then segments the target from the filtered image with an adaptive threshold segmentation method and removes false targets with a pipeline filter. Experimental results show that the system achieves a higher detection rate and a lower false-alarm rate.
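Two of the stages described above, morphological background suppression and adaptive thresholding, can be sketched as follows. This is a hedged illustration using a top-hat filter with arbitrarily chosen window size and threshold factor; the Laplacian-of-Gaussian enhancement and the pipeline filter are omitted:

```python
import numpy as np

def detect_dim_targets(img, k=3, nsigma=4.0):
    """Suppress smooth background with a grey opening (erosion then
    dilation, k x k flat structuring element); the top-hat residual keeps
    only small bright details.  An adaptive threshold mean + nsigma*std
    then segments candidate dim targets."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    views = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(k) for j in range(k)]
    eroded = np.min(views, axis=0)
    pe = np.pad(eroded, pad, mode='edge')
    views = [pe[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(k) for j in range(k)]
    opened = np.max(views, axis=0)
    residual = img - opened                 # top-hat: small bright details only
    thr = residual.mean() + nsigma * residual.std()
    return residual > thr                   # boolean target mask

# Synthetic frame: smooth background ramp, sensor noise, one dim point target.
y, _ = np.mgrid[0:32, 0:32]
frame = 0.2 * y + np.random.default_rng(0).normal(0.0, 0.05, (32, 32))
frame[16, 20] += 3.0
mask = detect_dim_targets(frame)
print(bool(mask[16, 20]))  # True: the point target survives the filtering
```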
Latest processing status and quality assessment of the GOMOS, MIPAS and SCIAMACHY ESA dataset
NASA Astrophysics Data System (ADS)
Niro, F.; Brizzi, G.; Saavedra de Miguel, L.; Scarpino, G.; Dehn, A.; Fehr, T.; von Kuhlmann, R.
2011-12-01
The GOMOS, MIPAS and SCIAMACHY instruments have been successfully observing the changing Earth's atmosphere since the launch of the ESA ENVISAT platform in March 2002. The measurements recorded by these instruments are relevant to the atmospheric-chemistry community both in their time extent and in their variety of observing geometries and techniques. In order to fully exploit these measurements, it is crucial to maintain good reliability in data processing and distribution and to continuously improve the scientific output, the goal being to meet the evolving needs of both near-real-time and research applications. Within this framework, the ESA operational processor remains the reference code, although many scientific algorithms are nowadays available to users. The ESA algorithm has a well-established calibration and validation scheme, a certified quality-assessment process, and the ability to reach a wide user community. Moreover, the ESA algorithm upgrade procedures and re-processing performance have improved considerably during the last two years, thanks to recent updates of the Ground Segment infrastructure and overall organization. The aim of this paper is to promote the usage and stress the quality of the ESA operational dataset for the GOMOS, MIPAS and SCIAMACHY missions. The recent upgrades to the ESA processors (GOMOS V6, MIPAS V5 and SCIAMACHY V5) will be presented, with detailed information on improvements in the scientific output and preliminary validation results. The planned algorithm evolution and ongoing re-processing campaigns will be discussed, including the adoption of advanced set-ups such as the MIPAS V6 re-processing on a cloud-computing system. Finally, the quality-control process that guarantees a consistent standard of quality to users will be illustrated.
The operational ESA algorithm is carefully tested before being switched into operations, and the near-real-time and off-line production is thoroughly verified via automatic quality-control procedures. The scientific validity of the ESA dataset will be further illustrated with examples of applications it can support, such as ozone-hole monitoring, volcanic-ash detection and the analysis of atmospheric composition changes over the past years.
Configurable Multi-Purpose Processor
NASA Technical Reports Server (NTRS)
Valencia, J. Emilio; Forney, Chirstopher; Morrison, Robert; Birr, Richard
2010-01-01
Advancements in technology have allowed the miniaturization of systems used in aerospace vehicles. This technology is driven by the need for next-generation systems that provide reliable, responsive, and cost-effective range operations while providing increased capabilities such as simultaneous mission support, increased launch trajectories, improved launch and landing opportunities, etc. Leveraging the newest technologies, the command and telemetry processor (CTP) concept provides a compact, flexible, and integrated solution for flight command and telemetry systems and range systems. The CTP is a relatively small circuit board that serves as a processing platform for high-dynamic, high-vibration environments. The CTP can be reconfigured and reprogrammed, allowing it to be adapted for many different applications. The design is centered around a configurable field-programmable gate array (FPGA) device that contains numerous logic cells that can be used to implement traditional integrated circuits. The FPGA contains two PowerPC processors running the VxWorks real-time operating system, which are used to execute software programs specific to each application. The CTP was designed and developed specifically to provide telemetry functions; namely, the command processing, telemetry processing, and GPS metric tracking of a flight vehicle. However, it can be used as a general-purpose processor board to perform numerous functions implemented in either hardware or software using the FPGA's processors and/or logic cells. Functionally, the CTP was designed for range safety applications where it would ultimately become part of a vehicle's flight termination system. Consequently, the major functions of the CTP are to perform the forward-link command processing, GPS metric tracking, return-link telemetry data processing, error detection and correction, data encryption/decryption, and initiation of flight termination action commands.
The CTP also had to be designed to survive and operate in a launch environment. Additionally, the CTP was designed to interface with the WFF (Wallops Flight Facility) custom-designed transceiver board used in the Low Cost TDRSS Transceiver (LCT2), also developed by WFF. The LCT2's transceiver board demodulates commands received from the ground via the forward link and sends them to the CTP, where they are processed. The CTP inputs and processes data from the inertial measurement unit (IMU) and the GPS receiver board, generates status data, and then sends the data to the transceiver board, where it is modulated and sent to the ground via the return link. Overall, the CTP combines processing with the ability to interface to a GPS receiver, an IMU, and a pulse code modulation (PCM) communication link, while supporting common interfaces, including Ethernet and serial interfaces, in a relatively small, lightweight package.
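The abstract lists error detection and correction among the CTP's functions without specifying the coding scheme. Purely as an illustration of that function, the sketch below applies CRC-16-CCITT, a common choice for telemetry frame checking; it is a hypothetical example, not the CTP's actual scheme:

```python
def crc16_ccitt(data: bytes) -> int:
    """Bitwise CRC-16/CCITT-FALSE: polynomial 0x1021, initial value
    0xFFFF, no reflection.  A mismatch between the received and the
    recomputed CRC flags a corrupted telemetry frame."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

# The standard check string for this CRC variant.
print(hex(crc16_ccitt(b'123456789')))  # 0x29b1
```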
NASA Astrophysics Data System (ADS)
Long, Jeffrey K.
1989-09-01
This thesis developed computer models of two types of amplitude-comparison monopulse processors using the Block Oriented System Simulation (BOSS) software package and determined the response of these models to impulsive input signals. The study was an effort to determine the susceptibility of monopulse tracking radars to impulsive jamming signals. Two types of amplitude-comparison monopulse receivers were modeled, one using logarithmic amplifiers and the other using automatic gain control (AGC) for signal normalization. Simulations of both types of systems were run under various conditions of gain or frequency imbalance between the two receiver channels. The resulting errors from the imbalanced simulations were compared with the outputs of similar baseline simulations that had no electrical imbalances. The accuracy of both types of processors was directly affected by gain or frequency imbalances in their receiver channels. In most cases, it was possible to generate both positive and negative angular errors, depending on the type and degree of mismatch between the channels. The system most susceptible to induced errors was a frequency-imbalanced processor using AGC circuitry. Any errors introduced are a function of the degree of mismatch between the channels and would therefore be difficult to exploit reliably.
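The gain-imbalance effect can be illustrated with a toy model (mine, not the BOSS simulation): in a log-amplifier amplitude-comparison processor the angle-error signal is proportional to the difference of the channel log amplitudes, so a channel gain imbalance appears directly as a bias in the error.

```python
import math

def monopulse_error_db(a_left, a_right, gain_imbalance_db=0.0):
    """Angle-error signal (in dB) of an idealized log-amplifier
    amplitude-comparison monopulse processor; gain_imbalance_db models an
    excess gain in the left receiver channel."""
    left_db = 20.0 * math.log10(a_left) + gain_imbalance_db
    right_db = 20.0 * math.log10(a_right)
    return left_db - right_db

# On boresight both channels see equal amplitude, so the error is zero...
print(monopulse_error_db(1.0, 1.0))       # 0.0
# ...but a 1 dB channel gain imbalance biases the boresight error by 1 dB.
print(monopulse_error_db(1.0, 1.0, 1.0))  # 1.0
```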
Processor Would Find Best Paths On Map
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P.
1990-01-01
A proposed very-large-scale integrated (VLSI) circuit image-data processor finds the path of least cost from a specified origin to any destination on a map. A cost of traversal is assigned to each picture element of the map. The path of least cost from the originating picture element to every other picture element is computed as the path that preserves as much as possible of the signal transmitted by the originating picture element. A dedicated microprocessor at each picture element stores its cost of traversal and performs its share of the least-cost-path computations. The least-cost-path problem occurs in research, in military maneuvers, and in planning vehicle routes.
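The article does not name the algorithm the per-pixel microprocessors collectively implement; as a sequential stand-in, the same least-cost map can be computed with Dijkstra's algorithm over the grid of traversal costs (a sketch under that assumption):

```python
import heapq

def least_cost_map(cost, start):
    """Cheapest accumulated traversal cost from `start` to every picture
    element, moving between 4-connected neighbours; cost[r][c] is the
    cost of traversing that picture element."""
    rows, cols = len(cost), len(cost[0])
    dist = [[float('inf')] * cols for _ in range(rows)]
    r0, c0 = start
    dist[r0][c0] = cost[r0][c0]
    heap = [(dist[r0][c0], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r][c]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]     # pay the cost of entering the cell
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# A cheap border around one expensive centre cell.
grid = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(least_cost_map(grid, (0, 0))[2][2])  # 5: the start cell plus four unit steps
```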
TRIAC II. A MatLab code for track measurements from SSNT detectors
NASA Astrophysics Data System (ADS)
Patiris, D. L.; Blekas, K.; Ioannides, K. G.
2007-08-01
A computer program named TRIAC II, written in MATLAB and run through a friendly GUI, has been developed for the recognition and parameter measurement of particle tracks from images of Solid State Nuclear Track Detectors. The program, using image analysis tools, counts the number of tracks and, depending on the current working mode, classifies them according to their radii (Mode I: circular tracks) or their axes (Mode II: elliptical tracks), their mean intensity value (brightness) and their orientation. Images of the detectors' surfaces are input to the code, which generates text files as output, including the number of counted tracks with the associated track parameters. Hough transform techniques are used for the estimation of the number of tracks and their parameters, providing results even in cases of overlapping tracks. Finally, it is possible for the user to obtain informative histograms as well as output files for each image and/or group of images.
Program summary
Title of program: TRIAC II
Catalogue identifier: ADZC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZC_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: Pentium III, 600 MHz
Installations: MATLAB 7.0
Operating system under which the program has been tested: Windows XP
Programming language used: MATLAB
Memory required to execute with typical data: 256 MB
No. of bits in a word: 32
No. of processors used: one
Has the code been vectorized or parallelized?: no
No. of lines in distributed program, including test data, etc.: 25 964
No. of bytes in distributed program including test data, etc.: 4 354 510
Distribution format: tar.gz
Additional comments: This program requires the MATLAB Statistics Toolbox and the Image Processing Toolbox to be installed.
Nature of physical problem: Following the passage of a charged particle (protons and heavier) through a Solid State Nuclear Track Detector (SSNTD), a damage region is created, usually named a latent track.
After the chemical etching of the detectors in aqueous NaOH or KOH solutions, latent tracks can be sufficiently enlarged (to diameters of 1 μm or more) to become visible under an optical microscope. Using the appropriate apparatus, one can record images of the SSNTD's surface. The shapes of the particles' tracks depend strongly on their charge, energy and angle of incidence. Generally, the tracks have elliptical shapes; in the special case of vertical incidence, they are circular. The manual counting of tracks is a tedious and time-consuming task, and an automatic system is needed to speed up the process and to increase the accuracy of the results. Method of solution: TRIAC II is based on a segmentation method that groups image pixels according to their intensity value (brightness) into a number of grey-level groups. After the segmentation of pixels, the program recognizes and separates the tracks from the background, subsequently performing image morphology, in which oversized objects or objects smaller than a threshold value are removed. Finally, using the appropriate Hough transform technique, the program counts the tracks, even those which overlap, and classifies them according to their shape parameters and brightness. Typical running time: The analysis of an image on a PC (Intel Pentium III processor running at 600 MHz) requires 2 to 10 minutes, depending on the number of observed tracks and the digital resolution of the image. Unusual features of the program: This program has been tested with images of CR-39 detectors exposed to alpha particles. In low-contrast images with few or small tracks, background pixels can be recognized as track pixels; to avoid this problem, the brightness of the background pixels should be sufficiently higher than that of the track pixels.
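The Hough voting that lets TRIAC II count overlapping tracks can be illustrated with a minimal fixed-radius circle transform. This is a sketch in Python rather than MATLAB, with our own names; the real program also handles elliptical tracks and unknown radii:

```python
import math
from collections import Counter

def hough_circles(edge_points, radius, n_angles=360):
    # Each edge pixel votes for every candidate center lying `radius`
    # away from it; a true center accumulates votes from its whole rim,
    # so two overlapping circles still yield two separate peaks.
    acc = Counter()
    for x, y in edge_points:
        for k in range(n_angles):
            t = 2 * math.pi * k / n_angles
            acc[(round(x - radius * math.cos(t)),
                 round(y - radius * math.sin(t)))] += 1
    return acc

# Synthetic circular "track": rim pixels of a radius-5 circle about (20, 20)
pts = [(round(20 + 5 * math.cos(a)), round(20 + 5 * math.sin(a)))
       for a in [i * math.pi / 18 for i in range(36)]]
best_center, votes = max(hough_circles(pts, 5).items(), key=lambda kv: kv[1])
print(best_center)
```

The accumulator peak recovers the track center to within pixel rounding; in practice one sweeps the radius as well and takes peaks in the 3-D (cx, cy, r) space.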
NASA Astrophysics Data System (ADS)
Mazza, G.; Aglieri Rinella, G.; Benotto, F.; Corrales Morales, Y.; Kugathasan, T.; Lattuca, A.; Lupi, M.; Ravasenga, I.
2017-02-01
The upgrade of the ALICE Inner Tracking System is based on a Monolithic Active Pixel Sensor ASIC designed in a CMOS 0.18 μm process. In order to provide the required output bandwidth (1.2 Gb/s for the inner layers and 400 Mb/s for the outer ones) on a single high-speed serial link, a custom Data Transmission Unit (DTU) has been developed in the same process. The DTU includes a clock multiplier PLL, a double data rate serializer and a pseudo-LVDS driver with pre-emphasis, and is designed to be SEU tolerant.
NASA Astrophysics Data System (ADS)
Kim, D.; Aglieri Rinella, G.; Cavicchioli, C.; Chanlek, N.; Collu, A.; Degerli, Y.; Dorokhov, A.; Flouzat, C.; Gajanana, D.; Gao, C.; Guilloux, F.; Hillemanns, H.; Hristozkov, S.; Junique, A.; Keil, M.; Kofarago, M.; Kugathasan, T.; Kwon, Y.; Lattuca, A.; Mager, M.; Sielewicz, K. M.; Marin Tobon, C. A.; Marras, D.; Martinengo, P.; Mazza, G.; Mugnier, H.; Musa, L.; Pham, T. H.; Puggioni, C.; Reidt, F.; Riedler, P.; Rousset, J.; Siddhanta, S.; Snoeys, W.; Song, M.; Usai, G.; Van Hoorne, J. W.; Yang, P.
2016-02-01
ALICE plans to replace its Inner Tracking System during the second long shutdown of the LHC in 2019 with a new 10 m2 tracker constructed entirely of monolithic active pixel sensors. The TowerJazz 180 nm CMOS imaging sensor process has been selected to produce the sensor, as it offers a deep p-well allowing full CMOS in-pixel circuitry, as well as different starting materials. First full-scale prototypes have been fabricated and tested, and radiation tolerance has been verified. In this paper the development of the charge-sensitive front end, and in particular its optimization for uniformity of charge threshold and time response, is presented.
Aerogel mass production for the CLAS12 RICH: Novel characterization methods and optical performance
NASA Astrophysics Data System (ADS)
Contalbrigo, M.; Balossino, I.; Barion, L.; Barnyakov, A. Yu.; Battaglia, G.; Danilyuk, A. F.; Katcin, A. A.; Kravchenko, E. A.; Mirazita, M.; Movsisyan, A.; Orecchini, D.; Pappalardo, L. L.; Squerzanti, S.; Tomassini, S.; Turisini, M.
2017-12-01
A large-area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capabilities in the momentum range from 3 GeV/c to 8 GeV/c for the CLAS12 experiments at the Jefferson Lab upgraded 12 GeV continuous electron beam accelerator facility. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors, and densely packed, highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large-angle tracks). The status of the aerogel mass production and the assessment studies of the aerogel optical performance are reported here.
MULTI-CORE AND OPTICAL PROCESSOR RELATED APPLICATIONS RESEARCH AT OAK RIDGE NATIONAL LABORATORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhen, Jacob; Kerekes, Ryan A; ST Charles, Jesse Lee
2008-01-01
High-speed parallelization of common tasks holds great promise as a low-risk approach to achieving the significant increases in signal processing and computational performance required for next-generation innovations in reconfigurable radio systems. Researchers at the Oak Ridge National Laboratory have been working on exploiting the parallelization offered by this emerging technology and applying it to a variety of problems. This paper highlights recent experience with different parallel processors applied to signal processing tasks that are directly relevant to the signal processing required for SDR/CR waveforms. The first is the EnLight Optical Core Processor, applied to matched filter (MF) correlation processing via fast Fourier transform (FFT) of broadband Doppler-sensitive waveforms (DSW) using active sonar arrays for target tracking. The second is the IBM Cell Broadband Engine, applied to a 2-D discrete Fourier transform (DFT) kernel for image processing and frequency-domain processing. The third is the NVIDIA graphical processor, applied to document feature clustering. EnLight Optical Core Processor: Optical processing is inherently capable of high parallelism that can be translated to very high performance, low power dissipation computing. The EnLight 256 is a small form factor signal processing chip (5x5 cm2) with a digital optical core that is being developed by an Israeli startup company. As part of its evaluation of foreign technology, ORNL's Center for Engineering Science Advanced Research (CESAR) had access to precursor EnLight 64 Alpha hardware for a preliminary assessment of capabilities in terms of large Fourier transforms for matched filter banks and of applications related to Doppler-sensitive waveforms. This processor is optimized for array operations, which it performs in fixed-point arithmetic at the rate of 16 TeraOPS at 8-bit precision, approximately 1000 times faster than the fastest DSP available today.
The optical core performs the matrix-vector multiplications, where the nominal matrix size is 256x256. The system clock is 125 MHz; at each clock cycle, 128K multiply-and-add operations are carried out, which yields a peak performance of 16 TeraOPS. IBM Cell Broadband Engine: The Cell processor is the product of 5 years of sustained, intensive R&D collaboration (involving over $400M of investment) between IBM, Sony, and Toshiba. Its architecture comprises one multithreaded 64-bit PowerPC processor element (PPE) with VMX capabilities and two levels of globally coherent cache, and 8 synergistic processor elements (SPEs). Each SPE consists of a processor (SPU) designed for streaming workloads, local memory, and a globally coherent direct memory access (DMA) engine. Computations are performed on 128-bit-wide single instruction multiple data (SIMD) streams. An integrated high-bandwidth element interconnect bus (EIB) connects the nine processors and their ports to external memory and to system I/O. The Applied Software Engineering Research (ASER) Group at ORNL is applying the Cell to a variety of text and image analysis applications. Research on Cell-equipped PlayStation 3 (PS3) consoles has led to the development of a correlation-based image recognition engine that enables a single PS3 to process images at more than 10X the speed of state-of-the-art single-core processors. NVIDIA Graphics Processing Units: The ASER group is also employing the latest NVIDIA graphical processing units (GPUs) to accelerate the clustering of thousands of text documents using recently developed clustering algorithms such as document flocking and affinity propagation.
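The matched-filter correlation benchmarked on these processors has a standard frequency-domain formulation; here is a hedged NumPy sketch with our own toy sizes, not the 256x256 optical-core configuration:

```python
import numpy as np

def matched_filter(signal, template):
    # Correlation via the frequency domain: multiply the signal spectrum
    # by the conjugate template spectrum, then invert. Zero-padding to
    # len(signal) + len(template) - 1 avoids circular wrap-around.
    n = len(signal) + len(template) - 1
    S = np.fft.rfft(signal, n)
    T = np.fft.rfft(template, n)
    return np.fft.irfft(S * np.conj(T), n)

rng = np.random.default_rng(0)
template = rng.standard_normal(64)          # stand-in for a waveform replica
signal = np.concatenate([np.zeros(100), template, np.zeros(100)])
out = matched_filter(signal, template)
print(int(np.argmax(out)))  # 100: the offset where the replica was embedded
```

The two FFTs and the pointwise product are exactly the array operations that map well onto optical or SIMD hardware; the correlation peak location gives the target delay.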
A new strips tracker for the upgraded ATLAS ITk detector
NASA Astrophysics Data System (ADS)
David, C.
2018-01-01
The ATLAS detector has been designed and developed to function in the environment of the present Large Hadron Collider (LHC). At the next-generation tracking detector proposed for the High Luminosity LHC (HL-LHC), the so-called ATLAS Phase-II Upgrade, the fluences and radiation levels will be higher by as much as a factor of ten. The new sub-detectors must thus be faster, of larger area, more segmented and more radiation hard, while the amount of inactive material should be minimized and the power supply to the front-end systems should be increased. For those reasons, the current inner tracker of the ATLAS detector will be fully replaced by an all-silicon tracking system that consists of a pixel detector at small radius close to the beam line and a large-area strip tracker surrounding it. This document gives an overview of the design of the strip inner tracker (Strip ITk) and summarises the intensive R&D activities performed in recent years by the numerous institutes within the Strip ITk collaboration. These studies are accompanied by a strong prototyping effort contributing to the optimisation of the Strip ITk's structure and components. This effort culminated recently in the release of the ATLAS Strip ITk Technical Design Report (TDR).
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: when particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinates and the background domain.
• Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is complete, and spatial redecomposition.
These are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
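The global particle find step reduces to mapping a particle's coordinates onto the background decomposition. A minimal serial sketch follows, with hypothetical names and a toy 2-D box decomposition; the dissertation's version runs over MPI against constructive solid geometry:

```python
def owning_rank(x, y, domains):
    # Resolve which processor owns the spatial domain containing (x, y).
    # `domains` maps rank -> (xmin, xmax, ymin, ymax) half-open boxes.
    for rank, (x0, x1, y0, y1) in domains.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return rank
    raise ValueError("particle lies outside the global domain")

# 2x2 decomposition of the unit square across 4 ranks (illustrative layout)
domains = {0: (0.0, 0.5, 0.0, 0.5), 1: (0.5, 1.0, 0.0, 0.5),
           2: (0.0, 0.5, 0.5, 1.0), 3: (0.5, 1.0, 0.5, 1.0)}
print(owning_rank(0.7, 0.2, domains))  # 1
```

In a scalable implementation this lookup avoids the linear scan (e.g., by indexing a structured decomposition directly), and misplaced particles are then communicated to their owning ranks.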
Experience with a Genetic Algorithm Implemented on a Multiprocessor Computer
NASA Technical Reports Server (NTRS)
Plassman, Gerald E.; Sobieszczanski-Sobieski, Jaroslaw
2000-01-01
Numerical experiments were conducted to determine the extent to which a Genetic Algorithm (GA) may benefit from a multiprocessor implementation, considering, on one hand, that analyses of individual designs in a population are independent of each other so that they may be executed concurrently on separate processors, and, on the other hand, that there are some operations in a GA that cannot be so distributed. The algorithm experimented with was based on a Gaussian distribution rather than bit exchange in the GA reproductive mechanism, and the test case was a hub frame structure with up to 1080 design variables. The experiments, engaging up to 128 processors, confirmed expectations of radical elapsed-time reductions compared to a conventional single-processor implementation. They also demonstrated that the time spent in the non-distributable parts of the algorithm and the attendant cross-processor communication may have a very detrimental effect on the efficient utilization of the multiprocessor machine and on the number of processors that can be used effectively in a concurrent manner. Three techniques were devised and tested to mitigate that effect, raising the efficiency to above 99 percent.
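The Gaussian-reproduction idea can be sketched as follows. This is our own minimal serial version with hypothetical parameter choices, not the paper's implementation; offspring are made by perturbing elite parents with Gaussian noise rather than by bit exchange:

```python
import random

def gaussian_ga(fitness, dim, pop_size=40, sigma=0.2, decay=0.97,
                generations=200, seed=1):
    # Elitist GA: keep the best quarter, refill the population with
    # Gaussian perturbations of elite parents, annealing the noise width.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)   # one fitness call per design
        elite = scored[: pop_size // 4]
        pop = elite + [[g + rng.gauss(0, sigma) for g in rng.choice(elite)]
                       for _ in range(pop_size - len(elite))]
        sigma *= decay                      # shrink the mutation width
    return min(pop, key=fitness)

# Minimize a simple sphere function in 3 design variables
best = gaussian_ga(lambda x: sum(g * g for g in x), dim=3)
print(sum(g * g for g in best))
```

Evaluating `fitness` across the population is the independent work that maps onto separate processors; the sorting, selection, and reproduction steps are the non-distributable remainder the abstract discusses.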
Satellite tracking and earth dynamics research programs
NASA Technical Reports Server (NTRS)
1982-01-01
The SAO laser site in Arequipa continued routine operations throughout the reporting period except for the months of March and April, when upgrading was underway. The laser in Orroral Valley was operational through March. Together with the cooperating stations in Wettzell, Grasse, Kootwijk, San Fernando, Helwan, and Metsähovi, the laser stations obtained a total of 37,099 quick-look observations on 978 passes of BE-C, Starlette, and LAGEOS. The Network continued to track LAGEOS at highest priority for polar motion and Earth rotation studies, and for other geophysical investigations, including crustal dynamics, Earth and ocean tides, and the general development of precision orbit determination. The Network performed regular tracking of BE-C and Starlette for refined determinations of station coordinates and the Earth's gravity field and for studies of solid Earth dynamics. Monthly statistics of the passes and points are given by station and by satellite.
Satellite-tracking and Earth dynamics research programs
NASA Technical Reports Server (NTRS)
1982-01-01
The activities carried out by the Smithsonian Astrophysical Observatory (SAO) are described. The SAO network continued to track LAGEOS at highest priority for polar motion and Earth rotation studies, and for other geophysical investigations, including crustal dynamics, Earth and ocean tides, and the general development of precision orbit determination. The network performed regular tracking of several other retroreflector satellites including GEOS-1, GEOS-3, BE-C, and Starlette for refined determinations of station coordinates and the Earth's gravity field and for studies of solid Earth dynamics. A major program in laser upgrading continued to improve ranging accuracy and data yield. This program includes an increase in pulse repetition rate from 8 ppm to 30 ppm, a reduction in laser pulse width from 6 nsec to 2 to 3 nsec, improvements in the photoreceiver and the electronics to improve daylight ranging, and an analog pulse detection system to improve range noise and accuracy. Data processing hardware and software are discussed.
A new ATLAS muon CSC readout system with system on chip technology on ATCA platform
Claus, R.
2015-10-23
The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high-bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture, with an ARM processor embedded in FPGA fabric and high-speed I/O resources together with auxiliary memories, forming a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf, through software waveform feature extraction, to output 32 S-links. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.
A new ATLAS muon CSC readout system with system on chip technology on ATCA platform
NASA Astrophysics Data System (ADS)
Claus, R.; ATLAS Collaboration
2016-07-01
The ATLAS muon Cathode Strip Chamber (CSC) back-end readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run 2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high-bandwidth generic DAQ implemented on the ATCA platform. The RCE design is based on the new System on Chip Xilinx Zynq series with a processor-centric architecture, with an ARM processor embedded in FPGA fabric and high-speed I/O resources together with auxiliary memories, forming a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the Zynq for G-link, S-link and TTC allowed the full system of 320 G-links from the 32 chambers to be processed by 6 COBs in one ATCA shelf, through software waveform feature extraction, to output 32 S-links. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning towards LHC Run 2.
A new ATLAS muon CSC readout system with system on chip technology on ATCA platform
NASA Astrophysics Data System (ADS)
Bartoldus, R.; Claus, R.; Garelli, N.; Herbst, R. T.; Huffer, M.; Iakovidis, G.; Iordanidou, K.; Kwan, K.; Kocian, M.; Lankford, A. J.; Moschovakos, P.; Nelson, A.; Ntekas, K.; Ruckman, L.; Russell, J.; Schernau, M.; Schlenker, S.; Su, D.; Valderanis, C.; Wittgen, M.; Yildiz, S. C.
2016-01-01
The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.
A new ATLAS muon CSC readout system with system on chip technology on ATCA platform
Bartoldus, R.; Claus, R.; Garelli, N.; ...
2016-01-25
The ATLAS muon Cathode Strip Chamber (CSC) backend readout system has been upgraded during the LHC 2013-2015 shutdown to be able to handle the higher Level-1 trigger rate of 100 kHz and the higher occupancy at Run-2 luminosity. The readout design is based on the Reconfigurable Cluster Element (RCE) concept for high bandwidth generic DAQ implemented on the Advanced Telecommunication Computing Architecture (ATCA) platform. The RCE design is based on the new System on Chip XILINX ZYNQ series with a processor-centric architecture with ARM processor embedded in FPGA fabric and high speed I/O resources. Together with auxiliary memories, all of these components form a versatile DAQ building block that can host applications tapping into both software and firmware resources. The Cluster on Board (COB) ATCA carrier hosts RCE mezzanines and an embedded Fulcrum network switch to form an online DAQ processing cluster. More compact firmware solutions on the ZYNQ for high speed input and output fiberoptic links and TTC allowed the full system of 320 input links from the 32 chambers to be processed by 6 COBs in one ATCA shelf. The full system was installed in September 2014. We will present the RCE/COB design concept, the firmware and software processing architecture, and the experience from the intense commissioning for LHC Run 2.
Requirements for benchmarking personal image retrieval systems
NASA Astrophysics Data System (ADS)
Bouguet, Jean-Yves; Dulong, Carole; Kozintsev, Igor; Wu, Yi
2006-01-01
It is now common to have accumulated tens of thousands of personal pictures. Efficient access to that many pictures can only be achieved with a robust image retrieval system. This application is of high interest to Intel processor architects: it is highly compute intensive, and could motivate end users to upgrade their personal computers to the next generations of processors. A key question is how to assess the robustness of a personal image retrieval system. Personal image databases are very different from the digital libraries that have been used by many Content Based Image Retrieval Systems [1]. For example, a personal image database has a lot of pictures of people, but a small set of different people, typically family, relatives, and friends. Pictures are taken in a limited set of places like home, work, school, and vacation destinations. The most frequent queries are searches for people and for places. These attributes, and many others, affect how a personal image retrieval system should be benchmarked, and benchmarks need to be different from existing ones based on, for example, art images or medical images. The attributes of the data set do not change the list of components needed for benchmarking such systems as specified in [2]: data sets, query tasks, ground truth, evaluation measures, and benchmarking events. This paper proposes a way to build these components to be representative of personal image databases and of the corresponding usage models.
Cascade Distiller System Performance Testing Interim Results
NASA Technical Reports Server (NTRS)
Callahan, Michael R.; Pensinger, Stuart; Sargusingh, Miriam J.
2014-01-01
The Cascade Distillation System (CDS) is a rotary distillation system with potential for greater reliability and lower energy costs than existing distillation systems. Based upon the results of the 2009 distillation comparison test (DCT) and recommendations of the expert panel, the Advanced Exploration Systems (AES) Water Recovery Project (WRP) advanced the technology by increasing the reliability of the system through a redesign of the bearing assemblies and improved rotor dynamics. In addition, the project improved the CDS power efficiency by optimizing the thermoelectric heat pump (TeHP) and heat exchanger design. Testing at the NASA-JSC Advanced Exploration System Water Laboratory (AES Water Lab) using a prototype Cascade Distillation Subsystem (CDS) wastewater processor (Honeywell International, Torrance, Calif.) with test support equipment and a control system developed by Johnson Space Center was performed to evaluate the performance of the system with the upgrades as compared to previous system performance. The system was challenged with Solution 1 from the NASA Exploration Life Support (ELS) distillation comparison testing performed in 2009. Solution 1 consisted of a mixed stream containing human-generated urine and humidity condensate. A secondary objective of this testing was to evaluate the performance of the CDS as compared to the state-of-the-art Distillation Assembly (DA) used in the ISS Urine Processor Assembly (UPA). This was done by challenging the system with ISS analog waste streams. This paper details the results of the AES WRP CDS performance testing.
GRAMM-X public web server for protein–protein docking
Tovchigrechko, Andrey; Vakser, Ilya A.
2006-01-01
Protein docking software GRAMM-X and its web interface () extend the original GRAMM Fast Fourier Transformation methodology by employing smoothed potentials, a refinement stage, and knowledge-based scoring. The web server frees users from the complex installation of database-dependent parallel software and from maintaining the large hardware resources needed for protein docking simulations. Docking problems submitted to the GRAMM-X server are processed by a 320-processor Linux cluster. The server was extensively tested through benchmarking, several months of public use, and participation in the CAPRI server track. PMID:16845016
A Simulation Analysis of an Automated Identification Processor for the Tactical Air Control System.
1986-06-01
...available at the work station for the M&I operators to identify aircraft. Some data is provided via the console such as the IFF/SIF and the airspace control...
...factors led to the development of efficient work stations for the functional positions in the air defense mission. Experimental Design: Experiments are...
...techniques that helped keep the thesis work "on track"! The Research Design: The research plan or design of this thesis effort is not unique. In fact...
Object classification for obstacle avoidance
NASA Astrophysics Data System (ADS)
Regensburger, Uwe; Graefe, Volker
1991-03-01
Object recognition is necessary for any mobile robot operating autonomously in the real world. This paper discusses an object classifier based on a 2-D object model. Obstacle candidates are tracked and analyzed; false alarms generated by the object detector are recognized and rejected. The methods have been implemented on a multi-processor system and tested in real-world experiments. They work reliably under favorable conditions, but problems sometimes occur, e.g., when objects contain many features (edges) or move in front of a structured background.
Collaborative writing: Tools and tips.
Eapen, Bell Raj
2007-01-01
The majority of technical writing is done by groups of experts, and various web-based applications have made this collaboration easy. Email exchange of word-processor documents with tracked changes used to be the standard technique for collaborative writing. However, web-based tools like Google Docs and Spreadsheets have made the process fast and efficient. Various versioning tools and synchronous editors are available for those who need additional functionality. Having a group leader who decides the scheduling, communication, and conflict-resolution protocols is important for successful collaboration.
Keys to the House: Unlocking Residential Savings With Program Models for Home Energy Upgrades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grevatt, Jim; Hoffman, Ian; Hoffmeyer, Dale
After more than 40 years of effort, energy efficiency program administrators and associated contractors still find it challenging to penetrate the home retrofit market, especially at levels commensurate with state and federal goals for energy savings and emissions reductions. Residential retrofit programs further have not coalesced around a reliably successful model. They still vary in design, implementation and performance, and they remain among the more difficult and costly options for acquiring savings in the residential sector. If programs are to contribute fully to meeting resource and policy objectives, administrators need to understand what program elements are key to acquiring residential savings as cost effectively as possible. To that end, the U.S. Department of Energy (DOE) sponsored a comprehensive review and analysis of home energy upgrade programs with proven track records, focusing on those with robustly verified savings and constituting good examples for replication. The study team reviewed evaluations for the period 2010 to 2014 for 134 programs that are funded by customers of investor-owned utilities. All are programs that promote multi-measure retrofits or major system upgrades. We paid particular attention to useful design and implementation features, costs, and savings for nearly 30 programs with rigorous evaluations of performance. This meta-analysis describes program models and implementation strategies for (1) direct install retrofits; (2) heating, ventilating and air-conditioning (HVAC) replacement and early retirement; and (3) comprehensive, whole-home retrofits. We analyze costs and impacts of these program models, in terms of both energy savings and emissions avoided. These program models can be useful guides as states consider expanding their strategies for acquiring energy savings as a resource and for emissions reductions.
We also discuss the challenges of using evaluations to create program models that can be confidently applied in multiple jurisdictions.
Digital Low Level RF Systems for Fermilab Main Ring and Tevatron
NASA Astrophysics Data System (ADS)
Chase, B.; Barnes, B.; Meisner, K.
1997-05-01
At Fermilab, a new Low Level RF system has been successfully installed and is operating in the Main Ring; installation is proceeding for a Tevatron system. This upgrade replaces aging CAMAC/NIM components for an increase in accuracy, reliability, and flexibility. These VXI systems are based on a custom three-channel direct digital synthesizer (DDS) module. Each synthesizer channel is capable of independent or ganged operation for both frequency and phase modulation. New frequency and phase values are computed at a 100 kHz rate on the module's Analog Devices ADSP21062 (SHARC) digital signal processor. The DSP concurrently handles feedforward, feedback, and beam manipulations. Higher-level state machines and the control system interface are handled at the crate level using the VxWorks operating system. This paper discusses the hardware, software and operational aspects of these LLRF systems.
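The heart of any DDS channel is a phase accumulator whose top bits index a sine lookup table; updating the tuning word at the DSP rate yields the frequency and phase modulation described above. A brief sketch of the principle, with illustrative bit widths of our own choosing rather than the Fermilab hardware's:

```python
import math

def dds_samples(freq_word, phase_word, n, acc_bits=32, lut_bits=10):
    # A phase accumulator advances by the frequency tuning word each
    # clock; the top lut_bits of the accumulator address a sine table.
    # Output frequency = fclk * freq_word / 2**acc_bits.
    mask = (1 << acc_bits) - 1
    lut = [math.sin(2 * math.pi * i / (1 << lut_bits))
           for i in range(1 << lut_bits)]
    acc, out = phase_word & mask, []
    for _ in range(n):
        out.append(lut[acc >> (acc_bits - lut_bits)])
        acc = (acc + freq_word) & mask
    return out

# Tuning word 2**29 on a 32-bit accumulator: one sine period per 8 clocks
samples = dds_samples(1 << 29, 0, 8)
print(round(samples[2], 3))  # quarter period in: sin(pi/2) = 1.0
```

Phase modulation corresponds to reloading `phase_word`, and frequency modulation to changing `freq_word`; both are cheap register writes, which is what makes the 100 kHz software update rate practical.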
Mobile robotic sensors for perimeter detection and tracking.
Clark, Justin; Fierro, Rafael
2007-02-01
Mobile robot/sensor networks have emerged as tools for environmental monitoring, search and rescue, exploration and mapping, evaluation of civil infrastructure, and military operations. These networks consist of many sensors each equipped with embedded processors, wireless communication, and motion capabilities. This paper describes a cooperative mobile robot network capable of detecting and tracking a perimeter defined by a certain substance (e.g., a chemical spill) in the environment. Specifically, the contributions of this paper are twofold: (i) a library of simple reactive motion control algorithms and (ii) a coordination mechanism for effectively carrying out perimeter-sensing missions. The decentralized nature of the methodology implemented could potentially allow the network to scale to many sensors and to reconfigure when adding/deleting sensors. Extensive simulation results and experiments verify the validity of the proposed cooperative control scheme.
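One of the simplest reactive rules such a library can contain is a bang-bang boundary controller: steer one way when the sensed substance exceeds a threshold and the other way when it does not, which makes the robot orbit the perimeter. A toy sketch, with velocities and threshold as assumptions rather than the paper's algorithms:

```python
def perimeter_step(concentration, threshold, v=1.0, omega=0.5):
    """One reactive control step for boundary tracking: drive forward
    while turning away from the substance when inside the perimeter
    (reading at/above threshold) and back toward it when outside.
    Returns (linear_velocity, angular_velocity)."""
    if concentration >= threshold:
        return v, -omega   # inside the spill: turn outward
    return v, +omega       # outside: turn back toward the boundary
```

Because each robot needs only its own sensor reading, rules of this form scale with the decentralized coordination mechanism the paper describes.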
Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.
Yuan, J; Moses, G A; McKenty, P W
2005-10-01
A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow propagation of "Monte Carlo particles" which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
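Under the continuous slowing down approximation, the energy a Monte Carlo particle leaves in each mesh cell along its straight-line track is the stopping power times the chord length through that cell. A simplified sketch; the cell bookkeeping and names are illustrative, not the authors' code:

```python
def deposit_along_track(energy, path_cells, stopping_power):
    """Deposit a particle's energy along a straight-line track under the
    continuous slowing-down approximation. `path_cells` is a list of
    (cell_id, chord_length) pairs in track order; `stopping_power(E)`
    returns dE/ds at energy E. Returns (remaining_energy, deposits)."""
    deposits = {}
    for cell, ds in path_cells:
        dE = min(energy, stopping_power(energy) * ds)  # never deposit more than we have
        deposits[cell] = deposits.get(cell, 0.0) + dE
        energy -= dE
        if energy <= 0.0:
            break                                      # particle fully stopped
    return energy, deposits
```

On a moving Lagrangian mesh, the chord lengths must be recomputed whenever the particle is relocated after a hydrodynamic step or rezone, which is the coupling issue the paper addresses.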
GeoTrack: bio-inspired global video tracking by networks of unmanned aircraft systems
NASA Astrophysics Data System (ADS)
Barooah, Prabir; Collins, Gaemus E.; Hespanha, João P.
2009-05-01
Research from the Institute for Collaborative Biotechnologies (ICB) at the University of California at Santa Barbara (UCSB) has identified swarming algorithms used by flocks of birds and schools of fish that enable these animals to move in tight formation and cooperatively track prey with minimal estimation errors, while relying solely on local communication between the animals. This paper describes ongoing work by UCSB, the University of Florida (UF), and the Toyon Research Corporation on the utilization of these algorithms to dramatically improve the capabilities of small unmanned aircraft systems (UAS) to cooperatively locate and track ground targets. Our goal is to construct an electronic system, called GeoTrack, through which a network of hand-launched UAS use dedicated on-board processors to perform multi-sensor data fusion. The nominal sensors employed by the system will be EO/IR video cameras on the UAS. When GMTI or other wide-area sensors are available, as in a layered sensing architecture, data from the standoff sensors will also be fused into the GeoTrack system. The output of the system will be position and orientation information on stationary or mobile targets in a global geo-stationary coordinate system. The design of the GeoTrack system requires significant advances beyond the current state-of-the-art in distributed control for a swarm of UAS to accomplish autonomous coordinated tracking; target geo-location using distributed sensor fusion by a network of UAS, communicating over an unreliable channel; and unsupervised real-time image-plane video tracking in low-powered computing platforms.
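The local-communication averaging that underlies such bird- and fish-inspired swarming algorithms can be illustrated with a standard consensus update, in which each node moves its estimate toward its neighbors' estimates; this is a generic sketch, not the GeoTrack fusion algorithm:

```python
def consensus_step(estimates, neighbors, eps=0.2):
    """One distributed-averaging step: node i nudges its estimate toward
    its neighbors' estimates using only locally communicated values.
    `estimates` maps node -> value; `neighbors` maps node -> list of nodes."""
    return {i: x + eps * sum(estimates[j] - x for j in neighbors[i])
            for i, x in estimates.items()}
```

Iterating this update drives every node to the network-wide average without any central fusion node, which is the property that lets a swarm agree on a target estimate over an unreliable, purely local channel.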
Kalman Orbit Optimized Loop Tracking
NASA Technical Reports Server (NTRS)
Young, Lawrence E.; Meehan, Thomas K.
2011-01-01
Under certain conditions of low signal power and/or high noise, there is insufficient signal to noise ratio (SNR) to close tracking loops with individual signals on orbiting Global Navigation Satellite System (GNSS) receivers. In addition, the processing power available from flight computers is not great enough to implement a conventional ultra-tight coupling tracking loop. This work provides a method to track GNSS signals at very low SNR without the penalty of requiring very high processor throughput to calculate the loop parameters. The Kalman Orbit-Optimized Loop (KOOL) tracking approach constitutes a filter with a dynamic model that uses the aggregate of information from all tracked GNSS signals to close the tracking loop for each signal. For applications where there is not a good dynamic model, such as very low orbits where atmospheric drag models may not be adequate to achieve the required accuracy, aiding from an IMU (inertial measurement unit) or other sensor will be added. The KOOL approach is based on research JPL has done to allow signal recovery from weak and scintillating signals observed during the use of GPS signals for limb sounding of the Earth's atmosphere. That approach uses the onboard PVT (position, velocity, time) solution to generate predictions for the range, range rate, and acceleration of the low-SNR signal. The low-SNR signal data are captured by a directed open loop. KOOL builds on the previous open loop tracking by including feedback and observable generation from the weak-signal channels so that the MSR receiver will continue to track and provide PVT, range, and Doppler data, even when all channels have low SNR.
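The core idea, a dynamic model supplying predictions that close the loop, can be illustrated with a constant-velocity Kalman filter over range and range rate. The KOOL filter itself uses an orbit model and all tracked signals jointly, so this is only a toy stand-in, with gains and noise levels as assumptions:

```python
import numpy as np

def kalman_predict_update(x, P, z, dt, q=1e-3, r=1.0):
    """One cycle of a constant-velocity Kalman filter over the state
    [range, range_rate]: predict the next range from the dynamic model,
    then correct with a (possibly noisy) range measurement z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])              # we measure range only
    Q = q * np.eye(2)
    x = F @ x                               # predict state
    P = F @ P @ F.T + Q                     # predict covariance
    y = z - (H @ x)[0]                      # innovation
    S = (H @ P @ H.T)[0, 0] + r
    K = (P @ H.T / S).ravel()               # Kalman gain
    x = x + K * y                           # update state
    P = (np.eye(2) - np.outer(K, H)) @ P    # update covariance
    return x, P
```

In a receiver, the predicted range and range rate would steer the correlator in place of a conventional loop filter, which is what lets tracking survive SNRs too low to close each loop individually.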
Chung, Hee-Jung; Song, Yoon Kyung; Hwang, Sang-Hyun; Lee, Do Hoon; Sugiura, Tetsuro
2018-02-25
Use of total laboratory automation (TLA) systems has expanded to microbiology and hemostasis, and systems have been upgraded to second and third generations. We herein report the first successful upgrade and fusion of different versions of the TLA system, improving laboratory turnaround time (TAT). A 21-day schedule was planned from the time of pre-meeting to installation and clinical sample application. We analyzed the monthly TAT for each menu, the distribution of "out of acceptable TAT" samples, and the "prolonged time out of acceptable TAT," before and after the upgrade and fusion. We installed and customized hardware, middleware, and software. The one-way CliniLog 2.0 version track, 50.0 m long, was changed to a 23.2-m-long one-way 2.0 version and an 18.7-m-long two-way 4.0 version. The monthly TAT of the outpatient samples, before and after upgrading the TLA system, was uniformly satisfactory in the chemistry and viral marker menus. However, in the tumor marker menu, the target TAT (98.0% of samples ≤60 minutes) was not satisfied during the familiarization period. There was no significant difference in the proportion of "out of acceptable TAT" samples before and after the TLA system upgrades (7.4‰ and 8.5‰). However, the mean "prolonged time out of acceptable TAT" in the chemistry samples was significantly shortened after the fusion, from 34.5 (±43.4) minutes to 17.4 (±24.0) minutes. Despite experimental challenges, a fusion of the TLA system shortened the "prolonged time out of acceptable TAT," indicating a distribution change in overall TAT. © 2018 Wiley Periodicals, Inc.
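The reported TAT metrics, the per-mille share of "out of acceptable TAT" samples and their mean overrun in minutes, can be computed from per-sample turnaround times as follows (a sketch; the function names are ours, not the study's software):

```python
def tat_stats(tat_minutes, limit=60):
    """Summarize turnaround times against an acceptable limit: the share
    of 'out of acceptable TAT' samples in per mille (as reported in the
    study) and the mean 'prolonged time out of acceptable TAT' in minutes."""
    overruns = [t - limit for t in tat_minutes if t > limit]
    per_mille = 1000.0 * len(overruns) / len(tat_minutes)
    mean_overrun = sum(overruns) / len(overruns) if overruns else 0.0
    return per_mille, mean_overrun
```

Tracking the mean overrun separately from the out-of-limit share is what lets the study detect the distribution change (shorter delays) even when the share itself did not move significantly.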
Plasma boundary shape control and real-time equilibrium reconstruction on NSTX-U
NASA Astrophysics Data System (ADS)
Boyer, M. D.; Battaglia, D. J.; Mueller, D.; Eidietis, N.; Erickson, K.; Ferron, J.; Gates, D. A.; Gerhardt, S.; Johnson, R.; Kolemen, E.; Menard, J.; Myers, C. E.; Sabbagh, S. A.; Scotti, F.; Vail, P.
2018-03-01
The upgrade to the National Spherical Torus eXperiment (NSTX-U) included two main improvements: a larger center-stack, enabling higher toroidal field and longer pulse duration, and the addition of three new tangentially aimed neutral beam sources, which increase available heating and current drive, and allow for flexibility in shaping power, torque, current, and particle deposition profiles. To best use these new capabilities and meet the high-performance operational goals of NSTX-U, major upgrades to the NSTX-U control system (NCS) hardware and software have been made. Several control algorithms, including those used for real-time equilibrium reconstruction and shape control, have been upgraded to improve and extend plasma control capabilities. As part of the commissioning phase of first plasma operations, the shape control system was tuned to control the boundary in both inner-wall limited and diverted discharges. It has been used to accurately track the requested evolution of the boundary (including the size of the inner gap between the plasma and central solenoid, which is a challenge for the ST configuration), X-point locations, and strike point locations, enabling repeatable discharge evolutions for scenario development and diagnostic commissioning.
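Boundary and gap control of this kind is typically closed with PI- or PID-type loops acting on the error between the requested and reconstructed boundary quantity. A one-step sketch; the gains and structure are illustrative, not the NCS shape-control algorithm:

```python
def pi_gap_control(gap_target, gap_measured, integral, kp=0.5, ki=10.0, dt=1e-3):
    """One PI step of a boundary-gap controller: the error between the
    requested and reconstructed inner gap (e.g. from real-time equilibrium
    reconstruction) drives a correction to a shaping-coil current request.
    Returns (control_output, updated_integral)."""
    error = gap_target - gap_measured
    integral += error * dt          # accumulate error for the integral term
    return kp * error + ki * integral, integral
```

Running such a loop at the control-system cycle rate against the real-time reconstruction is what allows the requested boundary evolution (gaps, X-points, strike points) to be tracked repeatably.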
NASA Astrophysics Data System (ADS)
Bisanz, T.; Große-Knetter, J.; Quadt, A.; Rieger, J.; Weingarten, J.
2017-08-01
The upgrade to the High Luminosity Large Hadron Collider will increase the instantaneous luminosity by more than a factor of 5, thus creating significant challenges for the tracking systems of all experiments. Recent advancements in active pixel detectors designed in CMOS processes provide attractive alternatives to the well-established hybrid design using passive sensors, since they allow for smaller pixel sizes and cost-effective production. This article presents studies of a high-voltage CMOS active pixel sensor designed for the ATLAS tracker upgrade. The sensor is glued to the read-out chip of the Insertable B-Layer, forming a capacitively coupled pixel detector. The pixel pitch of the device under test is 33 × 125 μm², while the pixels of the read-out chip have a pitch of 50 × 250 μm². Three pixels of the CMOS device are connected to one read-out pixel; which of these subpixels was hit is encoded in the amplitude of the output signal (subpixel encoding). Test beam measurements are presented that demonstrate the usability of this subpixel encoding scheme.
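Subpixel encoding can be decoded by matching the read-out amplitude to the nearest of the nominal levels injected by the three subpixels. A toy decoder; the levels and tolerance are assumptions, not the measured calibration:

```python
def decode_subpixel(amplitude, levels=(1.0, 2.0, 3.0), tolerance=0.5):
    """Toy decoder for amplitude-based subpixel encoding: three CMOS
    subpixels feed one read-out pixel, each injecting a distinct nominal
    amplitude; the nearest level identifies which subpixel was hit.
    Returns the subpixel index, or None if the amplitude is ambiguous."""
    best = min(range(len(levels)), key=lambda i: abs(amplitude - levels[i]))
    if abs(amplitude - levels[best]) > tolerance:
        return None   # out of range: noise or charge sharing
    return best
```

The achievable position resolution then depends on how well separated the amplitude levels are relative to noise, which is what the test beam measurements probe.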
NASA Astrophysics Data System (ADS)
Zha, Wangmei
2018-02-01
The Solenoidal Tracker at RHIC (STAR) experiment takes advantage of its excellent tracking and particle identification capabilities at mid-rapidity to explore the properties of strongly interacting QCD matter created in heavy-ion collisions at RHIC. The STAR collaboration presented 7 parallel and 2 plenary talks at Strangeness in Quark Matter 2017 and covered various topics including heavy flavor measurements, bulk observables, electro-magnetic probes and the upgrade program. This paper highlights some of the selected results.
HEC Applications on Columbia Project
NASA Technical Reports Server (NTRS)
Taft, Jim
2004-01-01
NASA's Columbia system consists of a cluster of twenty 512-processor SGI Altix systems. Each of these systems has a peak performance of 3 TFLOP/s - approximately the same as the entire compute capability at NAS just one year ago. Each 512p system is a single system image machine with one Linux OS, one high performance file system, and one globally shared memory. The NAS Terascale Applications Group (TAG) is chartered to assist in scaling NASA's mission critical codes to at least 512p in order to significantly improve emergency response during flight operations, as well as provide significant improvements in the codes and in the rate of scientific discovery across the scientific disciplines within NASA's Missions. Recent accomplishments are 4x improvements to codes in the ocean modeling community, 10x performance improvements in a number of computational fluid dynamics codes used in aero-vehicle design, and 5x improvements in a number of space science codes dealing in extreme physics. The TAG group will continue its scaling work to 2048p and beyond (10240 CPUs) as the Columbia system becomes fully operational and the upgrades to the SGI NUMAlink memory fabric are in place. The NUMAlink upgrades dramatically improve system scalability for a single application. These upgrades will allow a number of codes to execute faster at higher fidelity than ever before on any other system, thus increasing the rate of scientific discovery even further.
An empirical determination of the effects of sea state bias on Seasat altimetry
NASA Technical Reports Server (NTRS)
Born, G. H.; Richards, M. A.; Rosborough, G. W.
1982-01-01
A linear empirical model has been developed for the correction of sea state bias effects in Seasat altimetry altitude measurements, which are due to (1) electromagnetic bias caused by the fact that ocean wave troughs reflect the altimeter signal more strongly than the crests, shifting the apparent mean sea level toward the wave troughs, and (2) an independent instrument-related bias resulting from the inability of height corrections applied in the ground processor to compensate for simplifying assumptions made for the processor aboard Seasat. After applying appropriate corrections to the altimetry data, an empirical model for the sea state bias is obtained by differencing significant wave height and height measurements from coincident ground tracks. The bias coefficient is found by solving for the slope of a linear relationship between height differences and wave height differences that minimizes the residual height differences. In more than 50% of the 36 cases examined, 7% of the value of significant wave height should be subtracted for sea state bias correction.
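The empirical fit amounts to a one-parameter least-squares regression of crossover height differences on significant-wave-height differences, after which the correction subtracts that fraction of SWH from each height. A sketch under those assumptions (function names are ours):

```python
def fit_sea_state_bias(dh, dswh):
    """Least-squares slope b minimizing sum((dh - b*dswh)**2), where dh
    are height differences from coincident ground tracks and dswh the
    corresponding significant-wave-height differences. The slope b is
    the fractional sea state bias."""
    num = sum(h * s for h, s in zip(dh, dswh))
    den = sum(s * s for s in dswh)
    return num / den

def correct_height(height, swh, b=0.07):
    """Apply the bias correction: subtract the fitted fraction b of
    significant wave height (about 7% in most of the study's cases)."""
    return height - b * swh
```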
NASA Astrophysics Data System (ADS)
Matter, John; Gnanvo, Kondo; Liyanage, Nilanga; Solid Collaboration; Moller Collaboration
2017-09-01
The JLab Parity Violation In Deep Inelastic Scattering (PVDIS) experiment will use the upgraded 12 GeV beam and proposed Solenoidal Large Intensity Device (SoLID) to measure the parity-violating electroweak asymmetry in DIS of polarized electrons with high precision in order to search for physics beyond the Standard Model. Unlike many prior Parity-Violating Electron Scattering (PVES) experiments, PVDIS is a single-particle tracking experiment. Furthermore the experiment's high luminosity combined with the SoLID spectrometer's open configuration creates high-background conditions. As such, the PVDIS experiment has the most demanding tracking detector needs of any PVES experiment to date, requiring precision detectors capable of operating at high-rate conditions in PVDIS's full production luminosity. Developments in large-area GEM detector R&D and SoLID simulations have demonstrated that GEMs provide a cost-effective solution for PVDIS's tracking needs. The integrating-detector-based JLab Measurement Of Lepton Lepton Electroweak Reaction (MOLLER) experiment requires high-precision tracking for acceptance calibration. Large-area GEMs will be used as tracking detectors for MOLLER as well. The conceptual designs of GEM detectors for the PVDIS and MOLLER experiments will be presented.
Optical attenuation mechanism upgrades, MOBLAS, and TLRS systems
NASA Technical Reports Server (NTRS)
Eichinger, Richard; Johnson, Toni; Malitson, Paul; Oldham, Thomas; Stewart, Loyal
1993-01-01
This poster presentation describes the Optical Attenuation Mechanism (OAM) upgrades to the MOBLAS and TLRS Crustal Dynamics Satellite Laser Ranging (CDSLR) systems. The upgrades prepared these systems for laser ranging to the TOPEX/POSEIDON spacecraft, which was to be launched in the summer of 1992. The OAM permits the laser receiver to operate over the expected large signal dynamic range from TOPEX/POSEIDON and it reduces the number of pre- and post-calibrations for each satellite during multi-satellite tracking operations. It further simplifies the calibration bias corrections that had been made due to the pass-to-pass variation of the photomultiplier supply voltage and the transmit filter glass thickness. The upgrade incorporated improvements to the optical alignment capability of each CDSLR system through the addition of a CCD camera into the MOBLAS receive telescope and an alignment telescope onto the TLRS optical table. The OAM is stepper motor and microprocessor based; the system can be controlled either manually by a control switch panel or by computer via an EIA RS-232C serial interface. The OAM has a neutral density (ND) range of 0.0 to 4.0 and the positioning is absolute, referenced in steps of 0.1 ND. Both the fixed transmit filter and the daylight filter are solenoid actuated, with digital inputs and outputs to and from the OAM microprocessor. During automated operation, the operator has the option to override the remote control and operate the OAM system via a local control switch panel.
TCS and peripheral robotization and upgrade on the ESO 1-meter telescope at La Silla Observatory
NASA Astrophysics Data System (ADS)
Ropert, S.; Suc, V.; Jordán, A.; Tala, M.; Liedtke, P.; Royo, S.
2016-07-01
In this work we describe the robotization and upgrade of the ESO 1-m telescope located at La Silla Observatory. The ESO 1-m was the first telescope installed at La Silla, in 1966. It now hosts as its main instrument the FIber Dual EchellE Optical Spectrograph (FIDEOS), a high resolution spectrograph designed for precise radial velocity (RV) measurements on bright stars. In order to meet this project's requirements, the Telescope Control System (TCS) and some of the telescope's mechanical peripherals needed to be upgraded. The TCS was upgraded to modern and robust software running on a group of single board computers interacting together as a network with the CoolObs TCS developed by ObsTech. One of the particularities of the CoolObs TCS is that it fuses the input signals of two encoders per axis in order to achieve high tracking precision and resolution with moderate-cost encoders. One encoder is installed on the telescope axis and the other on the motor axis. The TCS was also integrated with the FIDEOS instrument system so that the whole system can be controlled through the same remote user interface. Our modern TCS unit allows the user to run observations remotely through a secured internet web interface, minimizing the need for an on-site observer and opening a new era of robotic astronomy for the ESO 1-m telescope.
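One simple way to fuse an on-axis absolute encoder with a fine motor-shaft encoder is a complementary blend: the motor reading, divided by the gear ratio, supplies resolution, while the axis reading pins down the absolute angle and suppresses drift. The CoolObs implementation is not described in detail, so this is only an assumed illustration:

```python
def fuse_encoders(axis_angle, motor_angle, gear_ratio, blend=0.9):
    """Complementary fusion of two encoders on one telescope axis:
    the motor encoder (scaled to axis units by the gear ratio) gives
    fine relative motion, the on-axis encoder gives coarse absolute
    truth; `blend` weights the fine reading. The blend value and the
    simple complementary form are assumptions, not the CoolObs scheme."""
    fine = motor_angle / gear_ratio   # motor shaft angle mapped to the axis
    return blend * fine + (1.0 - blend) * axis_angle
```

When both encoders agree, the fused value equals either reading; when the gear train flexes or slips, the axis encoder pulls the estimate back toward the true pointing.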
Performance studies of resistive Micromegas chambers for the upgrade of the ATLAS Muon Spectrometer
NASA Astrophysics Data System (ADS)
Ntekas, Konstantinos
2018-02-01
The ATLAS collaboration at the LHC has endorsed the resistive Micromegas technology (MM), along with the small-strip Thin Gap Chambers (sTGC), for the high luminosity upgrade of the first muon station in the high-rapidity region, the so-called New Small Wheel (NSW) project. The NSW requires fully efficient MM chambers, up to a particle rate of ~15 kHz/cm², with spatial resolution better than 100 μm independent of the track incidence angle and the magnetic field (B ≤ 0.3 T). Along with precise tracking, the MM must also provide a trigger signal complementary to the sTGC, so adequate timing resolution is required. Several tests have been performed on small (10 × 10 cm²) MM chambers using medium (10 GeV/c) and high (150 GeV/c) momentum hadron beams at CERN. Results on the efficiency and position resolution measured during these tests are presented, demonstrating the excellent characteristics of the MM that fulfil the NSW requirements. Exploiting the ability of the MM to work as a Time Projection Chamber, a novel method, called the μTPC, has been developed for the case of inclined tracks, allowing for a precise segment reconstruction using a single detection plane. A detailed description of the method along with thorough studies towards refining its performance are shown. Finally, during 2014 the first MM quadruplet (MMSW) following the NSW design scheme, comprising four detection planes in a stereo readout configuration, was realised at CERN. Test-beam results of this prototype are discussed and compared to theoretical expectations.
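In the μTPC method each fired strip contributes a space point (x, z = v_drift · t), and a straight-line fit through those points reconstructs the inclined segment within a single gas gap. A minimal least-squares sketch; units and names are illustrative:

```python
def mtpc_segment(strip_x, drift_time, v_drift):
    """muTPC-style segment fit: each fired strip at position x with drift
    time t gives a point (x, z = v_drift * t) inside the gas gap; a
    least-squares line through these points yields the local track
    segment. Returns (slope, intercept) of z as a function of x."""
    z = [v_drift * t for t in drift_time]
    n = len(strip_x)
    mx = sum(strip_x) / n
    mz = sum(z) / n
    slope = (sum((x - mx) * (zz - mz) for x, zz in zip(strip_x, z))
             / sum((x - mx) ** 2 for x in strip_x))
    return slope, mz - slope * mx
```

The slope directly gives the track inclination in the gap, which is why a single detection plane suffices for a precise segment when tracks are inclined.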
Performance verification of the CMS Phase-1 Upgrade Pixel detector
NASA Astrophysics Data System (ADS)
Veszpremi, V.
2017-12-01
The CMS tracker consists of two tracking systems utilizing semiconductor technology: the inner pixel and the outer strip detectors. The tracker detectors occupy the volume around the beam interaction region between 3 cm and 110 cm in radius and up to 280 cm along the beam axis. The pixel detector consists of 124 million pixels, corresponding to about 2 m² total area. It plays a vital role in the seeding of the track reconstruction algorithms and in the reconstruction of primary interactions and secondary decay vertices. It is surrounded by the strip tracker with 10 million read-out channels, corresponding to 200 m² total area. The tracker is operated in a high-occupancy and high-radiation environment established by particle collisions in the LHC. The current strip detector continues to perform very well. The pixel detector that has been used in Run 1 and in the first half of Run 2 was, however, replaced with the so-called Phase-1 Upgrade detector. The new system is better suited to match the increased instantaneous luminosity the LHC would reach before 2023. It was built to operate at an instantaneous luminosity of around 2×10³⁴ cm⁻²s⁻¹. The detector's new layout has an additional inner layer with respect to the previous one; it allows for more efficient tracking with a smaller fake rate at higher event pile-up. The paper focuses on the first results obtained during the commissioning of the new detector. It also includes challenges faced during the first data taking to reach the optimal measurement efficiency. Details will be given on the performance at high occupancy with respect to observables such as data rate, hit reconstruction efficiency, and resolution.
New high-precision drift-tube detectors for the ATLAS muon spectrometer
NASA Astrophysics Data System (ADS)
Kroha, H.; Fakhrutdinov, R.; Kozhin, A.
2017-06-01
Small-diameter muon drift tube (sMDT) detectors have been developed for upgrades of the ATLAS muon spectrometer. With a tube diameter of 15 mm, they provide about an order of magnitude higher rate capability than the present ATLAS muon tracking detectors, the MDT chambers with 30 mm tube diameter. The drift-tube design and the construction methods have been optimised for mass production and allow for the complex shapes required for maximising the acceptance. A record sense wire positioning accuracy of 5 μm has been achieved with the new design. In serial production, the wire positioning accuracy is routinely better than 10 μm. 14 new sMDT chambers are already operational in ATLAS, and a further 16 are under construction for installation in the 2019-2020 LHC shutdown. For the upgrade of the barrel muon spectrometer for the High-Luminosity LHC, 96 sMDT chambers will be constructed between 2020 and 2024.
Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)
NASA Technical Reports Server (NTRS)
Mesarch, Michael A.; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James
2007-01-01
This paper provides an overview of the required upgrades necessary for navigation of NASA's twin heliocentric science missions, Solar TErestrial RElations Observatory (STEREO) Ahead and Behind. The orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits where they are providing stereo imaging of the Sun.
Software Coherence in Multiprocessor Memory Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bolosky, William Joseph
1993-01-01
Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to have threads and the memory they access as near one another as possible. Typically, this involves putting memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. This coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software-maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. We find that in properly built systems, software-maintained coherence can perform comparably to, or even better than, hardware-maintained coherence. The architectural features necessary for efficient software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.
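A software-maintained write-invalidate protocol of the kind studied can be modeled with a directory that records which processors hold a copy of each page and invalidates stale copies on a write. A toy simulation, not the thesis's implementation (which operates at the virtual-memory level via page protection and traps):

```python
class SoftCoherence:
    """Toy software-maintained coherence: a directory tracks which
    processors cache each page; a write invalidates all other copies
    before proceeding (write-invalidate protocol)."""

    def __init__(self):
        self.memory = {}    # page -> authoritative value
        self.holders = {}   # page -> set of processors caching it
        self.caches = {}    # processor -> {page: cached value}

    def read(self, proc, page):
        cache = self.caches.setdefault(proc, {})
        if page not in cache:                    # miss: fetch and register
            cache[page] = self.memory.get(page, 0)
            self.holders.setdefault(page, set()).add(proc)
        return cache[page]

    def write(self, proc, page, value):
        for other in self.holders.get(page, set()) - {proc}:
            self.caches[other].pop(page, None)   # invalidate stale copies
        self.holders[page] = {proc}
        self.caches.setdefault(proc, {})[page] = value
        self.memory[page] = value
```

In a real system the invalidation is the expensive trap-and-communicate step, which is why a small page size and a fast trap mechanism matter for making the software approach competitive.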
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators were also integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in less time.
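The key requirement for such parallel runs is that each worker draw from its own independently seeded stream, which is the service SPRNG and DCMT provide. A minimal sketch using Python's multiprocessing and plain per-worker seeds in place of those libraries, with a Monte Carlo estimate of pi standing in for particle transport:

```python
import random
from multiprocessing import Pool

def mc_pi_hits(args):
    """One worker's contribution: count random points falling inside the
    unit quarter-circle, using a private RNG seeded just for this worker
    (a stand-in for an SPRNG/DCMT per-process stream)."""
    seed, n = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(n_per_worker=50_000, workers=4):
    # distinct seeds keep the workers' streams uncorrelated
    tasks = [(1234 + w, n_per_worker) for w in range(workers)]
    with Pool(workers) as pool:
        hits = sum(pool.map(mc_pi_hits, tasks))
    return 4.0 * hits / (n_per_worker * workers)
```

Because the workers never communicate until the final reduction, the speedup is close to linear, mirroring the behavior reported for parallel MC4 at large problem sizes.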
NASA Astrophysics Data System (ADS)
Malphrus, B. K.; Combs, M. S.; Kruth, J.
2001-12-01
Herein we report astronomical observations made with the NASA Advanced Data Acquisition System (ADAS). The NASA ADAS antenna, located at NASA Goddard Space Flight Center's Wallops Flight Facility, Virginia, is an 18-meter X-band antenna system that has been used primarily for satellite tracking and served as the telecommunication station for the NASA IUE satellite until ca. 1997. A joint NASA-Morehead State University (MSU)-Kentucky NSF EPSCoR venture has been initiated to upgrade and relocate the antenna system to MSU's Astrophysics Laboratory, where it will provide a research instrument and active laboratory for undergraduate students as well as be engaged in satellite tracking missions. As part of the relocation efforts, many systems will be upgraded, including replacement of a hydrostatic azimuth bearing with a high-precision electromechanical bearing, a new servo system, and a Ku-capable reflector surface. It is widely believed that small aperture centimeter-wave instruments can still contribute through three primary observing strategies: 1.) longitudinal studies of RF variations in cosmic phenomena, 2.) surveys of large areas of sky, and 3.) fast reactions to transient phenomena. MSU faculty and staff along with NASA engineers re-outfitted the ADAS system with RF systems and upgraded servo controllers during the spring and summer of 2001. Empirical measurements of primary system performance characteristics were made, including G/T (at S- and L-bands), noise figures, pointing and tracking accuracies, and drive speeds and accelerations. Baseline astronomical observations were made with the MSU L-band receiver using a 6 MHz bandwidth centered at 1420 MHz (21 cm) and observing over a range of frequencies (up to 2.5 MHz, tunable over the 6 MHz window) with a 2048-channel back-end spectrometer, providing up to 1 kHz frequency resolution.
Baseline observations of radio sources herein reported include Cygnus A, 3C 157, 3C 48 and the Andromeda Galaxy. After its transition to Morehead State University (which is expected to be completed in 2004), the 18-meter will be available for use by students and faculty from all U.S. institutions for astronomical observations. Transitioning of the 18-meter antenna is made possible by NASA, and the Kentucky NSF EPSCoR program and by grants from the U.S. Small Business Administration.
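The quoted spectral resolution follows directly from the observed bandwidth divided across the back-end channels; a 2.5 MHz window over 2048 channels gives roughly 1.2 kHz per channel, the order of the stated ~1 kHz figure (narrower windows give proportionally finer resolution):

```python
def channel_resolution_hz(bandwidth_hz, channels):
    """Per-channel frequency resolution of a back-end spectrometer:
    the observed bandwidth divided evenly across the channels."""
    return bandwidth_hz / channels
```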
Non-radiation hardened microprocessors in space-based remote sensing systems
NASA Astrophysics Data System (ADS)
DeCoursey, R.; Melton, Ryan; Estes, Robert R., Jr.
2006-09-01
The CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) mission is a comprehensive suite of active and passive sensors including a 20 Hz, 230 mJ Nd:YAG lidar, a visible wavelength Earth-looking camera and an imaging infrared radiometer. CALIPSO flies in formation with the Earth Observing System Post-Meridian (EOS PM) train, provides continuous, near-simultaneous measurements and is a planned 3-year mission. CALIPSO was launched into a 98 degree sun synchronous Earth orbit in April of 2006 to study clouds and aerosols and acquires over 5 gigabytes of data every 24 hours. Figure 1 shows the ground track of one CALIPSO orbit as well as high and low intensity South Atlantic Anomaly outlines. CALIPSO passes through the SAA several times each day. Space-based remote sensing systems that include multiple instruments and/or instruments such as lidar generate large volumes of data and require robust real-time hardware and software mechanisms and high throughput processors. Due to onboard storage restrictions and telemetry downlink limitations these systems must pre-process and reduce the data before sending it to the ground. This onboard processing and real-time requirement load may mean that newer, more powerful processors are needed even though acceptable radiation-hardened versions have not yet been released. CALIPSO's single board computer payload controller processor is actually a set of four (4) voting non-radiation hardened COTS PowerPC 603r's built on a single-width VME card by General Dynamics Advanced Information Systems (GDAIS). Significant radiation concerns for CALIPSO and other Low Earth Orbit (LEO) satellites include the South Atlantic Anomaly (SAA), the north and south poles and strong solar events.
Over much of South America and extending into the South Atlantic Ocean (see figure 1) the Van Allen radiation belts dip to just 200-800 km, and spacecraft entering this area are subjected to high-energy protons and experience higher than normal Single Event Upset (SEU) and Single Event Latch-up (SEL) rates. Although less significant, spacecraft flying in the area around the poles experience similar upsets. Finally, powerful solar proton events in the range of 10 MeV/10 pfu to 100 MeV/1 pfu, as forecasted and tracked by NOAA's Space Environment Center in Colorado, can result in Single Event Upset (SEU), Single Event Latch-up (SEL) and permanent failures such as Single Event Gate Rupture (SEGR) in some technologies. (Galactic Cosmic Rays (GCRs) are another source, especially for gate rupture.) CALIPSO mitigates common radiation concerns in its data handling through the use of redundant processors, radiation-hardened Application Specific Integrated Circuits (ASIC), hardware-based Error Detection and Correction (EDAC), processor and memory scrubbing, redundant boot code and mirrored files. After presenting a system overview this paper will expand on each of these strategies. Where applicable, related on-orbit data collected since the CALIPSO initial boot on May 4, 2006 will be noted.
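The redundant-processor mitigation described above can be sketched as a bitwise majority voter. This is an illustrative sketch only, shown with an odd number of voters for a clean majority; the actual four-way GDAIS voting logic and its tie-breaking policy are not detailed in this abstract.

```python
def majority_vote(words):
    """Bitwise majority vote over an odd number of redundant outputs.

    Each word is an integer; a bit is set in the result when more than
    half of the inputs agree on that bit.  This masks a single-event
    upset that flips bits in any one copy.
    """
    n = len(words)
    assert n % 2 == 1, "majority needs an odd number of voters"
    width = max(w.bit_length() for w in words)
    result = 0
    for bit in range(width):
        ones = sum((w >> bit) & 1 for w in words)
        if ones > n // 2:
            result |= 1 << bit
    return result

# One copy corrupted by an SEU: the vote recovers the true value.
print(majority_vote([0xCAFE, 0xCAFE, 0xCAFE ^ 0x0010]))  # 51966 (0xCAFE)
```

EDAC memory protection works on the same principle but encodes redundancy into check bits (e.g. SEC-DED Hamming codes) rather than whole duplicate processors.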
Non-Radiation-Hardened Microprocessors in Space-Based Remote Sensing Systems
NASA Technical Reports Server (NTRS)
Decoursey, Robert J.; Estes, Robert F.; Melton, Ryan
2006-01-01
The CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) mission is a comprehensive suite of active and passive sensors including a 20 Hz, 230 mJ Nd:YAG lidar, a visible-wavelength Earth-looking camera and an imaging infrared radiometer. CALIPSO flies in formation with the Earth Observing System Post-Meridian (EOS PM) train, provides continuous, near-simultaneous measurements and is a planned 3-year mission. CALIPSO was launched into a 98-degree sun-synchronous Earth orbit in April of 2006 to study clouds and aerosols and acquires over 5 gigabytes of data every 24 hours. The ground track of one CALIPSO orbit as well as high- and low-intensity South Atlantic Anomaly outlines is shown. CALIPSO passes through the SAA several times each day. Space-based remote sensing systems that include multiple instruments and/or instruments such as lidar generate large volumes of data and require robust real-time hardware and software mechanisms and high-throughput processors. Due to onboard storage restrictions and telemetry downlink limitations these systems must pre-process and reduce the data before sending it to the ground. This onboard processing and real-time requirement load may mean that newer, more powerful processors are needed even though acceptable radiation-hardened versions have not yet been released. CALIPSO's single-board-computer payload controller processor is actually a set of four (4) voting non-radiation-hardened COTS PowerPC 603r's built on a single-width VME card by General Dynamics Advanced Information Systems (GDAIS). Significant radiation concerns for CALIPSO and other Low Earth Orbit (LEO) satellites include the South Atlantic Anomaly (SAA), the north and south poles and strong solar events.
Over much of South America and extending into the South Atlantic Ocean the Van Allen radiation belts dip to just 200-800 km, and spacecraft entering this area are subjected to high-energy protons and experience higher than normal Single Event Upset (SEU) and Single Event Latch-up (SEL) rates. Although less significant, spacecraft flying in the area around the poles experience similar upsets. Finally, powerful solar proton events in the range of 10 MeV/10 pfu to 100 MeV/1 pfu, as forecasted and tracked by NOAA's Space Environment Center in Colorado, can result in Single Event Upset (SEU), Single Event Latch-up (SEL) and permanent failures such as Single Event Gate Rupture (SEGR) in some technologies. (Galactic Cosmic Rays (GCRs) are another source, especially for gate rupture.) CALIPSO mitigates common radiation concerns in its data handling through the use of redundant processors, radiation-hardened Application Specific Integrated Circuits (ASIC), hardware-based Error Detection and Correction (EDAC), processor and memory scrubbing, redundant boot code and mirrored files. After presenting a system overview this paper will expand on each of these strategies. Where applicable, related on-orbit data collected since the CALIPSO initial boot on May 4, 2006 will be noted.
NASA Technical Reports Server (NTRS)
Snow, Walter L.; Childers, Brooks A.; Jones, Stephen B.; Fremaux, Charles M.
1993-01-01
A model space positioning system (MSPS), a state-of-the-art, real-time tracking system to provide the test engineer with on-line model pitch and spin rate information, is described. It is noted that the six-degree-of-freedom post-processor program will require additional programming effort both in the automated tracking mode for high spin rates and in accuracy to meet the measurement objectives. An independent multicamera system intended to augment the MSPS is studied using laboratory calibration methods based on photogrammetry to characterize the losses in various recording options. Data acquired to Super VHS tape, encoded with Vertical Interval Time Code and transcribed to video disk, are considered a reasonably priced choice for post-editing and processing video data.
Langley Aircraft Landing Dynamics Facility
NASA Technical Reports Server (NTRS)
Davis, Pamela A.; Stubbs, Sandy M.; Tanner, John A.
1987-01-01
The Langley Research Center has recently upgraded the Landing Loads Track (LLT) to improve the capability of low-cost testing of conventional and advanced landing gear systems. The unique feature of the Langley Aircraft Landing Dynamics Facility (ALDF) is the ability to test aircraft landing gear systems on actual runway surfaces at operational ground speeds and loading conditions. A historical overview of the original LLT is given, followed by a detailed description of the new ALDF systems and operational capabilities.
Hernández, Diana; Phillips, Douglas
2015-07-01
Low-income households contend with high energy costs and poor thermal comfort due to poor structural conditions and energy inefficiencies in their homes. Energy efficiency upgrades can potentially reduce energy expenses and improve thermal comfort, while also addressing problematic issues in the home environment. The present mixed method pilot study explored the impacts of energy efficiency upgrades in 20 households in a low-income community in New York City. Surveys and interviews were administered to the heads of household in a variety of housing types. Interviews were also conducted with landlords of buildings that had recently undergone upgrades. Findings indicate that energy efficiency measures resulted in improved thermal comfort, enhanced health and safety and reduced energy costs. Participants reported largely positive experiences with the upgrades, resulting in direct and indirect benefits. However, results also indicate negative consequences associated with the upgrades and further illustrate that weatherization alone was insufficient to address all of the issues facing low-income households. Moreover, qualitative results revealed differing experiences of low-income renters compared to homeowners. Overall, energy efficiency upgrades are a promising intervention to mitigate the energy and structurally related challenges facing low-income households, but larger scale research is needed to capture the long-term implications of these upgrades.
Hernández, Diana; Phillips, Douglas
2016-01-01
Low-income households contend with high energy costs and poor thermal comfort due to poor structural conditions and energy inefficiencies in their homes. Energy efficiency upgrades can potentially reduce energy expenses and improve thermal comfort, while also addressing problematic issues in the home environment. The present mixed method pilot study explored the impacts of energy efficiency upgrades in 20 households in a low-income community in New York City. Surveys and interviews were administered to the heads of household in a variety of housing types. Interviews were also conducted with landlords of buildings that had recently undergone upgrades. Findings indicate that energy efficiency measures resulted in improved thermal comfort, enhanced health and safety and reduced energy costs. Participants reported largely positive experiences with the upgrades, resulting in direct and indirect benefits. However, results also indicate negative consequences associated with the upgrades and further illustrate that weatherization alone was insufficient to address all of the issues facing low-income households. Moreover, qualitative results revealed differing experiences of low-income renters compared to homeowners. Overall, energy efficiency upgrades are a promising intervention to mitigate the energy and structurally related challenges facing low-income households, but larger scale research is needed to capture the long-term implications of these upgrades. PMID:27054092
Contour Detector and Data Acquisition System for the Left Ventricular Outline
NASA Technical Reports Server (NTRS)
Reiber, J. H. C. (Inventor)
1978-01-01
A real-time contour detector and data acquisition system is described for an angiographic apparatus having a video scanner for converting an X-ray image of a structure, characterized by a change in brightness level compared with its surroundings, into video format and displaying the X-ray image in recurring video fields. The real-time contour detector and data acquisition system includes track-and-hold circuits; a reference-level analog computer circuit; an analog comparator; a digital processor; a field memory; and a computer interface.
1980-05-16
[OCR fragment; distribution list and acronym glossary omitted] ... the designated targets. The error detector outputs are fed to the Target Data Computer (TDC) to update the beam position data during the next track interval. (b) Processor
NASA Technical Reports Server (NTRS)
Green, D. M.
1978-01-01
Software programs are described, one of which implements a voltage regulation function and one of which implements a charger function with peak-power tracking of its input. The software, written in modular fashion, is intended as a vehicle for further experimentation with the P-3 system. A control teleprinter allows an operator to make parameter modifications to the control algorithm during experiments. The programs require 3K of ROM and 2K of RAM each. User manuals for each system are included, as well as a third program for simple I/O control.
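Peak-power tracking of the charger input, as mentioned above, is commonly implemented with a perturb-and-observe loop. The following is a sketch only: the toy `curve` function, step size, and iteration count are assumptions for illustration, not the P-3 control software.

```python
def perturb_and_observe(measure_power, duty, step=0.01, iterations=50):
    """Minimal perturb-and-observe peak-power tracker.

    measure_power(duty) returns the power drawn at a given converter
    duty cycle.  The tracker nudges the operating point each cycle and
    reverses direction whenever power drops, so it climbs to the peak
    and then dithers around it.
    """
    direction = 1
    power = measure_power(duty)
    for _ in range(iterations):
        duty += direction * step
        new_power = measure_power(duty)
        if new_power < power:          # wrong way: reverse the perturbation
            direction = -direction
        power = new_power
    return duty

# Toy source curve with its power peak at duty = 0.6.
curve = lambda d: 100 - (d - 0.6) ** 2 * 400
print(round(perturb_and_observe(curve, 0.3), 2))   # settles near the 0.6 peak
```

The steady-state dither of one step around the peak is inherent to perturb-and-observe; smaller steps reduce the ripple at the cost of slower convergence.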
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The ATLAS collaboration at LHC has chosen the Micromegas (Micro Mesh Gaseous Structure) technology along with the small-strip Thin Gap Chambers (sTGC) for the high luminosity upgrade of the inner muon station in the high-rapidity region, the so-called New Small Wheel (NSW). It employs eight layers of Micromegas detectors and eight layers of sTGC. The NSW project requires fully efficient Micromegas chambers with spatial resolution down to 100 μm in the precision coordinate for momentum reconstruction, and at mm level in the azimuthal (second) coordinate, over a total active area of 1200 m², with a rate capability up to about 15 kHz/cm² and operation in a moderate magnetic field up to B = 0.4 T. The required tracking capability is provided by the intrinsic space resolution combined with a mechanical precision at the level of 30 μm along the precision coordinate. Together with the precise tracking capability, the Micromegas chambers should provide a trigger signal. Several tests have been performed on small (10×10 cm²) and large (1×1 m²) single-gap chamber prototypes using high energy hadron beams at CERN, low and intermediate energy (0.5-5 GeV) electron beams at Frascati and DESY, neutron beams at Demokritos (Athens) and Garching (Munich), and cosmic rays. More recently, two quadruplets with dimensions 1.2×0.5 m² and the same configuration and structure foreseen for the NSW upgrade have been built at CERN and tested with high energy pion/muon beams. Results obtained in the most recent tests, in different configurations and operating conditions and as a function of the magnetic field, will be presented, along with a comparison between different read-out electronics, either based on the APV25 chip or on a new digital front-end ASIC developed in its second version (VMM2) as a prototype of the final chip that will be employed in the NSW upgrade.
Zaleski, Michael; Chen, Yun An; Chetlen, Alison L; Mack, Julie; Xu, Liyan; Dodge, Daleela G; Karamchandani, Dipti M
2018-05-11
The clinical decision to excise intraductal papilloma (IDP) without atypia diagnosed on biopsy remains controversial. We sought to establish clinical and histologic predictors (if any) of upgrade in IDP. 296 biopsies (in 278 women) with a histologic diagnosis of IDP without atypia were retrospectively identified and placed into incidental (no corresponding imaging correlate) or non-incidental (positive imaging correlate) groups. 253/296 (85.5%) cases were non-incidental, and 43/296 (14.5%) were incidental. 73.1% (185/253) of non-incidental and 48.8% (21/43) of incidental cases underwent excision. 12.4% (23/185) of non-incidental cases were upgraded to cancer or a high-risk lesion: 8 ductal carcinoma in situ (DCIS), 8 atypical ductal hyperplasia (ADH), 6 lobular neoplasia, and 1 flat epithelial atypia. No histopathologic feature on the biopsy in the non-incidental group predicted upgrade; however, a past history of atypia was significantly associated with upgrade. 2 of the 21 incidental cases upgraded (1 to ADH and 1 to lobular neoplasia); the former had a past history of ADH. Both incidental upgrades were >1 mm in size and were not completely excised on the biopsy. None of the incidental cases that appeared completely excised on biopsy upgraded, irrespective of size on biopsy. These findings suggest that all non-incidental IDPs should be considered candidates for surgical excision, given the 12.4% upgrade rate and the lack of definitive histologic predictors of upgrade. Patients with incidental IDPs (if <1 mm, completely excised on biopsy, and with no history of a high-risk breast lesion) can be spared excision. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
2014-01-01
Topics include: Data Fusion for Global Estimation of Forest Characteristics From Sparse Lidar Data; Debris and Ice Mapping Analysis Tool - Database; Data Acquisition and Processing Software - DAPS; Metal-Assisted Fabrication of Biodegradable Porous Silicon Nanostructures; Post-Growth, In Situ Adhesion of Carbon Nanotubes to a Substrate for Robust CNT Cathodes; Integrated PEMFC Flow Field Design for Gravity-Independent Passive Water Removal; Thermal Mechanical Preparation of Glass Spheres; Mechanistic-Based Multiaxial-Stochastic-Strength Model for Transversely-Isotropic Brittle Materials; Methods for Mitigating Space Radiation Effects, Fault Detection and Correction, and Processing Sensor Data; Compact Ka-Band Antenna Feed with Double Circularly Polarized Capability; Dual-Leadframe Transient Liquid Phase Bonded Power Semiconductor Module Assembly and Bonding Process; Quad First Stage Processor: A Four-Channel Digitizer and Digital Beam-Forming Processor; Protective Sleeve for a Pyrotechnic Reefing Line Cutter; Metabolic Heat Regenerated Temperature Swing Adsorption; CubeSat Deployable Log Periodic Dipole Array; Re-entry Vehicle Shape for Enhanced Performance; NanoRacks-Scale MEMS Gas Chromatograph System; Variable Camber Aerodynamic Control Surfaces and Active Wing Shaping Control; Spacecraft Line-of-Sight Stabilization Using LWIR Earth Signature; Technique for Finding Retro-Reflectors in Flash LIDAR Imagery; Novel Hemispherical Dynamic Camera for EVAs; 360 deg Visual Detection and Object Tracking on an Autonomous Surface Vehicle; Simulation of Charge Carrier Mobility in Conducting Polymers; Observational Data Formatter Using CMOR for CMIP5; Propellant Loading Physics Model for Fault Detection Isolation and Recovery; Probabilistic Guidance for Swarms of Autonomous Agents; Reducing Drift in Stereo Visual Odometry; Future Air-Traffic Management Concepts Evaluation Tool; Examination and A Priori Analysis of a Direct Numerical Simulation Database for High-Pressure Turbulent Flows; and Resource-Constrained Application of Support Vector Machines to Imagery.
Beam Stability R&D for the APS MBA Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sereno, Nicholas S.; Arnold, Ned D.; Bui, Hanh D.
2015-01-01
Beam diagnostics required for the APS multi-bend achromat (MBA) upgrade are driven by ambitious beam stability requirements. The major AC stability challenge is to correct rms beam motion to 10% of the rms beam size at the insertion device source points from 0.01 to 1000 Hz. The vertical plane represents the biggest challenge for AC stability, which is required to be 400 nm rms for a 4-micron vertical beam size. In addition to AC stability, long-term drift over a period of seven days is required to be 1 micron or less. Major diagnostics R&D components include improved rf beam position processing using commercially available FPGA-based BPM processors, new X-ray beam position monitors based on hard X-ray fluorescence from copper and Compton scattering off diamond, mechanical motion sensing to detect and correct long-term vacuum chamber drift, a new feedback system featuring a tenfold increase in sampling rate, and a several-fold increase in the number of fast correctors and BPMs in the feedback algorithm. Feedback system development represents a major effort, and we are pursuing a novel algorithm that integrates orbit correction for both slow and fast correctors down to DC simultaneously. Finally, a new data acquisition system (DAQ) is being developed to simultaneously acquire streaming data from all diagnostics as well as the feedback processors for commissioning and fault diagnosis. Results of studies and the design effort are reported.
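Orbit feedback of the kind described above typically inverts a measured BPM-to-corrector response matrix in a least-squares sense. The sketch below uses toy matrix sizes and random numbers purely for illustration; it is not the APS feedback algorithm, which additionally handles frequency-domain weighting between slow and fast correctors.

```python
import numpy as np

def correct_orbit(response, orbit, gain=1.0):
    """One iteration of pseudo-inverse (SVD-based) orbit correction.

    response: BPM-by-corrector response matrix (orbit shift per unit kick)
    orbit:    measured orbit error at each BPM
    Returns corrector kicks minimizing the rms orbit error.
    """
    return -gain * np.linalg.pinv(response) @ orbit

rng = np.random.default_rng(0)
R = rng.normal(size=(8, 4))            # 8 BPMs, 4 correctors (toy numbers)
true_kicks = np.array([0.5, -0.2, 0.1, 0.3])
orbit = R @ true_kicks                 # orbit produced by the unwanted kicks
kicks = correct_orbit(R, orbit)
residual = orbit + R @ kicks           # orbit after applying the correction
print(np.allclose(residual, 0.0))      # correctable error is fully removed
```

With more BPMs than correctors the pseudo-inverse gives the least-squares optimum; in practice small singular values are truncated to avoid amplifying BPM noise.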
Souza, W.R.
1999-01-01
This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade to SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D SUTRA (2-dimension) simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D SUTRA (3-dimension) simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.
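The finite-element based gridding and interpolation mentioned above rests on standard bilinear shape functions. Below is a minimal sketch for one 4-node quadrilateral element; the node ordering is an assumption for illustration, not SutraPlot's actual convention.

```python
def quad_interpolate(values, xi, eta):
    """Bilinear shape-function interpolation inside a 4-node element.

    values:  nodal values at the element corners, ordered
             (-1,-1), (1,-1), (1,1), (-1,1) in local coordinates
    xi, eta: local coordinates in [-1, 1]
    This is the standard finite-element basis a gridding post-processor
    can use to resample pressure or concentration onto a regular grid.
    """
    shape = [(1 - xi) * (1 - eta) / 4,
             (1 + xi) * (1 - eta) / 4,
             (1 + xi) * (1 + eta) / 4,
             (1 - xi) * (1 + eta) / 4]
    return sum(n * v for n, v in zip(shape, values))

# At the element centre every node contributes equally.
print(quad_interpolate([1.0, 2.0, 3.0, 4.0], 0.0, 0.0))   # 2.5
```

Gridding then amounts to locating, for each regular grid point, the element that contains it, mapping to local (xi, eta), and evaluating this interpolant.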
Roadmap for searching cosmic rays correlated with the extraterrestrial neutrinos seen at IceCube
NASA Astrophysics Data System (ADS)
Carpio, J. A.; Gago, A. M.
2017-06-01
We have built sky maps showing the expected arrival directions of 120 EeV ultrahigh-energy cosmic rays (UHECRs) directionally correlated with the latest astrophysical neutrino tracks observed at IceCube, including the four-year high-energy starting events (HESEs) and the two-year northern tracks, taken as point sources. We have considered contributions to UHECR deflections from the Galactic and the extragalactic magnetic field and a UHECR composition compatible with the current expectations. We have used the Jansson-Farrar JF12 model for the Galactic magnetic field and an extragalactic magnetic field strength of 1 nG and coherence length of 1 Mpc. We observe that the regions outside of the Galactic plane are more strongly correlated with the neutrino tracks than those adjacent to or in it, where IceCube HESE events 37 and 47 are good candidates to search for excesses, or anisotropies, in the UHECR flux. On the other hand, clustered northern tracks around (l, b) = (0°, -30°) and (l, b) = (-150°, -30°) are promising candidates for a stacked point source search. For example, we have focused on the region of UHECR arrival directions, at 150 EeV, correlated with IceCube HESE event 37 located at (l, b) = (-137.1°, 65.8°) in the northern hemisphere, far away from the Galactic plane, obtaining an angular size of ~5°, being ~3° for 200 EeV and ~8° for 120 EeV. We report a p value of 0.20 for a stacked point source search using current Auger and Telescope Array data, consistent with current results from both collaborations. Using Telescope Array data alone, we found a projected live time of 72 years to find correlations, but this should improve with the planned Auger upgrade.
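Deciding whether a UHECR arrival direction falls inside one of the correlation regions above reduces, at its core, to a great-circle separation on the sky. A minimal sketch follows; the comparison direction is invented for illustration.

```python
import math

def angular_separation(l1, b1, l2, b2):
    """Great-circle separation (degrees) between two sky directions
    given in Galactic longitude/latitude (degrees)."""
    l1, b1, l2, b2 = map(math.radians, (l1, b1, l2, b2))
    dl = l2 - l1
    # Vincenty-style formula: numerically stable at small separations.
    num = math.hypot(
        math.cos(b2) * math.sin(dl),
        math.cos(b1) * math.sin(b2)
        - math.sin(b1) * math.cos(b2) * math.cos(dl),
    )
    den = math.sin(b1) * math.sin(b2) + math.cos(b1) * math.cos(b2) * math.cos(dl)
    return math.degrees(math.atan2(num, den))

# A hypothetical UHECR direction a few degrees from IceCube HESE event 37
# at (l, b) = (-137.1, 65.8) would count as correlated at the ~5 deg scale.
print(round(angular_separation(-137.1, 65.8, -133.0, 68.0), 1))   # ~2.7
```

The atan2 form avoids the precision loss of acos-based haversine comparisons near zero separation, which matters when testing against few-degree correlation radii.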
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor
2013-01-01
There has been considerable interest in developing and demonstrating a hybrid "polished panel" optical receiver concept that would replace the microwave panels on the Deep Space Network's (DSN) 34-meter antennas with highly polished aluminum panels, thus enabling simultaneous optical and microwave reception. A test setup has been installed on the 34-meter research antenna at DSS-13 (Deep Space Station 13) at NASA's Goldstone Deep Space Communications Complex in California in order to assess the feasibility of this concept. Here we describe the results of a recent effort to dramatically reduce the dimensions of the point-spread function (PSF) generated by a custom polished panel, thus enabling improved optical communications performance. The latest results are compared to the previous configuration in terms of quantifiable PSF improvement. In addition, the performance of acquisition and tracking algorithms designed specifically for the polished panel PSF are evaluated and compared, based on data obtained from real-time tracking of planets and bright stars with the 34-meter research antenna at DSS-13.
High Accuracy Ground-based near-Earth-asteroid Astrometry using Synthetic Tracking
NASA Astrophysics Data System (ADS)
Zhai, Chengxing; Shao, Michael; Saini, Navtej; Sandhu, Jagmit; Werne, Thomas; Choi, Philip; Ely, Todd A.; Jacobs, Christopher S.; Lazio, Joseph; Martin-Mur, Tomas J.; Owen, William M.; Preston, Robert; Turyshev, Slava; Michell, Adam; Nazli, Kutay; Cui, Isaac; Monchama, Rachel
2018-01-01
Accurate astrometry is crucial for determining the orbits of near-Earth-asteroids (NEAs). Further, the future of deep space high data rate communications is likely to be optical communications, such as the Deep Space Optical Communications package that is part of the baseline payload for the planned Psyche Discovery mission to the Psyche asteroid. We have recently upgraded our instrument on the Pomona College 1 m telescope, at JPL's Table Mountain Facility, for conducting synthetic tracking by taking many short exposure images. These images can then be combined in post-processing to track both asteroid and reference stars to yield accurate astrometry. Utilizing the precision of the current and future Gaia data releases, the JPL-Pomona College effort is now demonstrating precision astrometry on NEAs, which is likely to be of considerable value for cataloging NEAs. Further, treating NEAs as proxies of future spacecraft that carry optical communication lasers, our results serve as a measure of the astrometric accuracy that could be achieved for future plane-of-sky optical navigation.
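The shift-and-add operation at the heart of synthetic tracking can be sketched as below. This uses integer-pixel shifts and toy data only; a real pipeline fits sub-pixel motions over a grid of trial rates and picks the rate that maximizes the co-added signal.

```python
import numpy as np

def shift_and_add(frames, rate, dt):
    """Synthetic tracking: co-add short exposures along a trial motion.

    frames: stack of short-exposure images, shape (n, h, w)
    rate:   assumed sky motion of the target in pixels/s, as (vy, vx)
    dt:     exposure cadence in seconds
    Each frame is shifted back by the motion accumulated since frame 0,
    so a moving object adds coherently while static noise averages down.
    """
    n, h, w = frames.shape
    stacked = np.zeros((h, w))
    for k, frame in enumerate(frames):
        dy = int(round(rate[0] * k * dt))
        dx = int(round(rate[1] * k * dt))
        stacked += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return stacked / n

# Toy data: a faint source drifting 1 px/frame in x, buried in noise.
rng = np.random.default_rng(1)
frames = rng.normal(0.0, 1.0, size=(16, 32, 32))
for k in range(16):
    frames[k, 16, 8 + k] += 3.0          # per-frame SNR of only ~3
stacked = shift_and_add(frames, rate=(0.0, 1.0), dt=1.0)
print(stacked[16, 8] > 1.5)              # coherent sum stands out of the noise
```

Co-adding n frames boosts the point-source SNR by roughly sqrt(n) relative to a single short exposure, which is what lets the technique track targets too faint to detect in any individual frame.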
High Accuracy Ground-based near-Earth-asteroid Astrometry using Synthetic Tracking
NASA Astrophysics Data System (ADS)
Zhai, C.; Shao, M.; Saini, N. S.; Sandhu, J. S.; Werne, T. A.; Choi, P.; Ely, T. A.; Jacobs, C.; Lazio, J.; Martin-Mur, T. J.; Owen, W. K.; Preston, R. A.; Turyshev, S. G.
2017-12-01
Accurate astrometry is crucial for determining the orbits of near-Earth-asteroids (NEAs). Further, the future of deep space high data rate communications is likely to be optical communications, such as the Deep Space Optical Communications package to be carried on the Psyche Discovery mission to the Psyche asteroid. We have recently upgraded our instrument on the Pomona College 1 m telescope, at JPL's Table Mountain Facility, for conducting synthetic tracking by taking many short exposure images. These images can then be combined in post-processing to track both asteroid and reference stars to yield accurate astrometry. Utilizing the precision of the current and future Gaia data releases, the JPL-Pomona College effort is now demonstrating precision astrometry on NEAs, which is likely to be of considerable value for cataloging NEAs. Further, treating NEAs as proxies of future spacecraft that carry optical communication lasers, our results serve as a measure of the astrometric accuracy that could be achieved for future plane-of-sky optical navigation.
NASA Technical Reports Server (NTRS)
Steinmetz, G. G.
1980-01-01
Using simulation, an improved longitudinal velocity vector control wheel steering mode and an improved electronic display format for an advanced flight system were developed and tested. Guidelines for the development phase were provided by test pilot critique summaries of the previous system. The results include performances from computer-generated step column inputs across the full airplane speed and configuration envelope, as well as piloted performance results taken from a reference line tracking task and an approach to landing task conducted under various environmental conditions. The analysis of the results for the reference line tracking and approach to landing tasks indicates clearly detectable improvement in pilot tracking accuracy with a reduction in physical workload. The original objectives of upgrading the longitudinal axis of the velocity vector control wheel steering mode were successfully met when measured against the test pilot critique summaries and the original purpose outlined for this type of augmented control mode.
Real Time Target Tracking Using Dedicated Vision Hardware
NASA Astrophysics Data System (ADS)
Kambies, Keith; Walsh, Peter
1988-03-01
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL) which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. The vision system is inside of the loop closure of the robot tracking system, therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State of the art VME based vision boards capable of processing the image at frame rates were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so there is special emphasis placed on this topic in the paper.
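A common way to derive the visual error signal for a tracking loop like the one above is an intensity-weighted centroid of the target blob. The sketch below is illustrative only, not the Adaptive Automation implementation, which ran concurrently on dedicated VME vision hardware at frame rates.

```python
import numpy as np

def target_centroid(image, threshold):
    """Locate a bright target by intensity-weighted centroid.

    Pixels at or below the threshold are zeroed so background clutter
    does not pull the estimate; the returned (row, col) can feed a
    robot's position loop as the visual error signal.
    """
    mask = np.where(image > threshold, image, 0.0)
    total = mask.sum()
    if total == 0:
        return None                     # no target in view
    rows, cols = np.indices(image.shape)
    return (float((rows * mask).sum() / total),
            float((cols * mask).sum() / total))

# Synthetic frame with a uniform 3x3 target centered at (10, 20).
frame = np.zeros((32, 32))
frame[9:12, 19:22] = 1.0
print(target_centroid(frame, 0.5))      # (10.0, 20.0)
```

In a closed position loop the latency of this computation sits inside the loop, which is why the original system needed specialized hardware to keep throughput and delay within the controller's stability margins.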
A new variable-resolution associative memory for high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annovi, A.; Amerio, S.; Beretta, M.
2011-07-01
We describe an important advancement for the Associative Memory device (AM). The AM is a VLSI processor for pattern recognition based on Content Addressable Memory (CAM) architecture. The AM is optimized for on-line track finding in high-energy physics experiments. Pattern matching is carried out by finding track candidates in coarse resolution 'roads'. A large AM bank stores all trajectories of interest, called 'patterns', for a given detector resolution. The AM extracts roads compatible with a given event during detector read-out. Two important variables characterize the quality of the AM bank: its 'coverage' and the level of fake roads. The coverage, which describes the geometric efficiency of a bank, is defined as the fraction of tracks that match at least one pattern in the bank. Given a certain road size, the coverage of the bank can be increased simply by adding patterns to the bank, while the number of fakes unfortunately grows roughly in proportion to the number of patterns in the bank. Moreover, as the luminosity increases, the fake rate increases rapidly because of the increased silicon occupancy. To counter that, we must reduce the width of our roads. If we decrease the road width using the current technology, the system will become very large and extremely expensive. We propose an elegant solution to this problem: the 'variable resolution patterns'. Each pattern and each detector layer within a pattern will be able to use the optimal width, but we will use a 'don't care' feature (inspired by ternary CAMs) to increase the width when that is more appropriate. In other words, we can use patterns of variable shape. As a result we reduce the number of fake roads, while keeping the efficiency high and avoiding excessive bank size due to the reduced width. We describe the idea, the implementation in the new AM design and the implementation of the algorithm in the simulation. Finally we show the effectiveness of the 'variable resolution patterns' idea using simulated high-occupancy events in the ATLAS detector.
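The 'don't care' matching borrowed from ternary CAMs can be modeled in software as a value/mask pair per detector layer: bits where the mask is 0 are ignored, widening the road only on the layers where that helps. This is an illustrative model of the matching semantics, not the AM chip logic.

```python
class TernaryPattern:
    """A coarse 'road' with per-layer don't-care bits, ternary-CAM style.

    Each layer stores (value, mask): bits where mask is 0 are ignored
    during matching, so the effective road width can vary per layer.
    """
    def __init__(self, layers):
        self.layers = layers            # list of (value, mask) per layer

    def matches(self, hits):
        """True when every layer's hit agrees on all non-don't-care bits."""
        return all((hit & mask) == (value & mask)
                   for (value, mask), hit in zip(self.layers, hits))

# Layer 0 is compared at full resolution; layer 1 ignores its lowest
# bit, doubling the effective road width on that layer only.
pattern = TernaryPattern([(0b1011, 0b1111), (0b0110, 0b1110)])
print(pattern.matches([0b1011, 0b0111]))   # True: layer 1 bit 0 is don't-care
print(pattern.matches([0b1010, 0b0110]))   # False: layer 0 differs
```

One stored pattern with a don't-care bit thus replaces two full-resolution patterns, which is how variable resolution curbs bank size while the narrow layers keep the fake rate down.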
Using MaxCompiler for the high level synthesis of trigger algorithms
NASA Astrophysics Data System (ADS)
Summers, S.; Rose, A.; Sanders, P.
2017-02-01
Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java-based tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves low resource usage and has a latency of 187.5 ns per iteration.
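The iterative predict/update structure of such a Kalman-filter track fit can be sketched for a straight-line track in one coordinate. This is a generic textbook filter, not the CMS MaxCompiler implementation; the layer spacing, seed covariance, and noise values are invented for illustration.

```python
import numpy as np

def kalman_track_fit(hits, spacing, meas_var):
    """Minimal Kalman-filter straight-line track fit.

    State x = [position, slope]; each detector layer measures position
    only.  F propagates the state between equally spaced layers and H
    projects out the measured coordinate.
    """
    F = np.array([[1.0, spacing], [0.0, 1.0]])   # propagation between layers
    H = np.array([[1.0, 0.0]])                   # we measure position only
    x = np.array([hits[0], 0.0])                 # crude seed from first hit
    P = np.diag([meas_var, 1.0])                 # seed covariance
    for z in hits[1:]:
        x = F @ x                                # predict state...
        P = F @ P @ F.T                          # ...and its covariance
        S = H @ P @ H.T + meas_var               # innovation covariance
        K = P @ H.T / S                          # Kalman gain
        x = x + (K * (z - H @ x)).ravel()        # update with the new hit
        P = (np.eye(2) - K @ H) @ P
    return x

hits = [0.1, 1.1, 2.0, 3.1, 3.9]                 # noisy hits, true slope ~1
pos, slope = kalman_track_fit(hits, spacing=1.0, meas_var=0.01)
print(round(slope, 1))                           # close to the true slope of 1
```

Because each hit is folded in with one fixed-size matrix update, the loop body maps naturally onto pipelined FPGA logic, with a constant latency per iteration as quoted above.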
The KLOE-2 Inner Tracker: Detector commissioning and operation
NASA Astrophysics Data System (ADS)
Balla, A.; Bencivenni, G.; Branchini, P.; Ciambrone, P.; Czerwinski, E.; De Lucia, E.; Cicco, A.; Di Domenici, D.; Felici, G.; Morello, G.
2017-02-01
The KLOE-2 experiment started its data-taking campaign in November 2014 with an upgraded tracking system including an Inner Tracker built with the cylindrical GEM technology, operating together with the Drift Chamber to improve the apparatus' tracking performance. The Inner Tracker is composed of four cylindrical triple-GEM detectors, each provided with an X-V strips-pads stereo readout and equipped with the GASTONE ASIC developed inside the KLOE-2 collaboration. Although GEM detectors are already used in high energy physics experiments, this device is considered a frontier detector due to its cylindrical geometry: KLOE-2 is the first experiment to use this novel solution. The results of the detector commissioning, detection efficiency evaluation, calibration studies and alignment, with both dedicated cosmic-ray muon and Bhabha scattering events, will be reported.
Design of efficient and simple interface testing equipment for opto-electric tracking system
NASA Astrophysics Data System (ADS)
Liu, Qiong; Deng, Chao; Tian, Jing; Mao, Yao
2016-10-01
Interface testing for an opto-electric tracking system is important work for assuring system performance: it verifies, at different levels, whether the design of each electronic interface matches its communication protocol. Opto-electric tracking systems nowadays are complicated, composed of many functional units. Usually, interface testing is executed between completely manufactured units, so it depends heavily on the design and manufacturing progress of each unit, as well as on the people involved; as a result, it often takes days or weeks. To solve this problem, this paper proposes efficient and simple interface testing equipment for opto-electric tracking systems, consisting of optional interface circuit cards, a processor and a test program. The hardware cards provide the matched hardware interfaces, easily supplied by a hardware engineer. Automatic code generation is employed to adapt to new communication protocols: automatic acquisition of test items, automatic construction of the code architecture and automatic encoding combine to form a new, adapted test program quickly. After a few simple steps, customized interface testing equipment with a matching test program and interfaces is ready for a system under test within minutes. This equipment has been used with many opto-electric tracking systems to test all or part of their interfaces, reducing test time from days to hours and greatly improving test efficiency, with high software quality and stability and without manual coding. Used as a common tool, the interface testing equipment proposed by this paper has replaced the traditional interface testing method with a far more efficient one.
Fu, Ling-Lin; Li, Jian-Rong
2014-01-01
The ability to trace fecal indicators and food-borne pathogens to the point of origin has major ramifications for the food industry, food regulatory agencies, and public health. Such information would enable food producers and processors to better understand sources of contamination and thereby take corrective actions to prevent transmission. Microbial source tracking (MST), which currently is largely focused on determining sources of fecal contamination in waterways, is also providing the scientific community with tools for tracking both fecal bacteria and food-borne pathogen contamination in the food chain. Approaches to MST are commonly classified as library-dependent methods (LDMs) or library-independent methods (LIMs). These tools will have widespread applications, including use for regulatory compliance, pollution remediation, and risk assessment, and will reduce the incidence of illness associated with food and water. Our aim in this review is to highlight the use of molecular MST methods as applied to understanding the source and transmission of food-borne pathogens. The future directions of MST research are also discussed.
Optimal mapping of irregular finite element domains to parallel processors
NASA Technical Reports Server (NTRS)
Flower, J.; Otto, S.; Salama, M.
1987-01-01
Mapping a solution domain of n finite elements onto N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
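The annealing analogy can be illustrated with a toy Python sketch that assigns element workloads to processors while minimizing load imbalance. This is a generic load-balancing sketch under an invented cost function (workload variance only), not the paper's Hypercube-specific formulation:

```python
import math
import random

# Simulated-annealing assignment of element workloads to processors.
# Cost = variance of per-processor loads; a real mapping would also
# penalize inter-processor communication across subdomain boundaries.

def anneal_mapping(work, n_proc, steps=20000, t0=1.0, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(n_proc) for _ in work]
    loads = [0.0] * n_proc
    for w, p in zip(work, assign):
        loads[p] += w

    def imbalance():
        mean = sum(loads) / n_proc
        return sum((l - mean) ** 2 for l in loads)

    cost = imbalance()
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9     # linear cooling schedule
        i = rng.randrange(len(work))
        old, new = assign[i], rng.randrange(n_proc)
        if new == old:
            continue
        loads[old] -= work[i]; loads[new] += work[i]   # trial move
        new_cost = imbalance()
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / t):
            assign[i], cost = new, new_cost            # accept
        else:
            loads[old] += work[i]; loads[new] -= work[i]  # revert
    return assign, loads
```

Accepting some uphill moves at high temperature is what lets the search escape local minima, mirroring the physical annealing process the paper exploits.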
NASA Astrophysics Data System (ADS)
Rumsewicz, Michael
1994-04-01
In this paper, we examine call completion performance, rather than message throughput, in a Common Channel Signaling network in which the processing resources, and not the transmission resources, of a Signaling Transfer Point (STP) are overloaded. Specifically, we perform a transient analysis, via simulation, of a network consisting of a single Central Processor-based STP connecting many local exchanges. We consider the efficacy of using the Transfer Controlled (TFC) procedure when the network call attempt rate exceeds the processing capability of the STP. We find the following: (1) the success of the control depends critically on the rate at which TFCs are sent; (2) use of the TFC procedure in the event of processor overload can provide reasonable call completion rates.
NASA Astrophysics Data System (ADS)
Shafqat, N.; Di Mitri, S.; Serpico, C.; Nicastro, S.
2017-09-01
The FERMI free-electron laser (FEL) of Elettra Sincrotrone Trieste, Italy, is a user facility driven by a 1.5 GeV 10-50 Hz S-band radiofrequency linear accelerator (linac), and it is based on an external laser seeding scheme that allows lasing at the shortest fundamental wavelength of 4 nm. An increase of the beam energy to 1.8 GeV at a tolerable breakdown rate, and an improvement of the final beam quality is desired in order to allow either lasing at 4 nm with a higher flux, or lasing at shorter wavelengths. This article presents the impedance analysis of newly designed S-band accelerating structures, for replacement of the existing backward travelling wave structures (BTWS) in the last portion of the FERMI linac. The new structure design promises higher accelerating gradient and lower impedance than those of the existing BTWS. Particle tracking simulations show that, with the linac upgrade, the beam relative energy spread, its linear and nonlinear z-correlation internal to the bunch, and the beam transverse emittances can be made smaller than the ones in the present configuration, with expected advantage to the FEL performance. The repercussion of the upgrade on the linac quadrupole magnets setting, for a pre-determined electron beam optics, is also considered.
Status, upgrades, and advances of RTS2: the open source astronomical observatory manager
NASA Astrophysics Data System (ADS)
Kubánek, Petr
2016-07-01
RTS2 is an open source observatory control system. In development since early 2000, it has continued to receive new features over the last two years. RTS2 is a modular, network-based distributed control system, featuring telescope drivers with advanced tracking and pointing capabilities, fast camera drivers, and high-level modules for the "business logic" of the observatory, connected to an SQL database. Running on all continents of the planet, it has accumulated extensive experience controlling both partial and full observatory setups.
The tracking, calorimeter and muon detectors of the H1 experiment at HERA
NASA Astrophysics Data System (ADS)
Abt, I.; Ahmed, T.; Aid, S.; Andreev, V.; Andrieu, B.; Appuhn, R.-D.; Arnault, C.; Arpagaus, M.; Babaev, A.; Bärwolff, H.; Bán, J.; Banas, E.; Baranov, P.; Barrelet, E.; Bartel, W.; Barth, M.; Bassler, U.; Basti, F.; Baynham, D. E.; Baze, J.-M.; Beck, G. A.; Beck, H. P.; Bederede, D.; Behrend, H.-J.; Beigbeder, C.; Belousov, A.; Berger, Ch.; Bergstein, H.; Bernard, R.; Bernardi, G.; Bernet, R.; Bernier, R.; Berthon, U.; Bertrand-Coremans, G.; Besançon, M.; Beyer, R.; Biasci, J.-C.; Biddulph, P.; Bidoli, V.; Binder, E.; Binko, P.; Bizot, J.-C.; Blobel, V.; Blouzon, F.; Blume, H.; Borras, K.; Boudry, V.; Bourdarios, C.; Brasse, F.; Braunschweig, W.; Breton, D.; Brettel, H.; Brisson, V.; Bruncko, D.; Brune, C.; Buchner, U.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Burmeister, P.; Busata, A.; Buschhorn, G.; Campbell, A. J.; Carli, T.; Charles, F.; Charlet, M.; Chase, R.; Clarke, D.; Clegg, A. B.; Colombo, M.; Commichau, V.; Connolly, J. F.; Cornett, U.; Coughlan, J. A.; Courau, A.; Cousinou, M.-C.; Coutures, Ch.; Coville, A.; Cozzika, G.; Cragg, D. A.; Criegee, L.; Cronström, H. I.; Cunliffe, N. H.; Cvach, J.; Cyz, A.; Dagoret, S.; Dainton, J. B.; Danilov, M.; Dann, A. W. E.; Darvill, D.; Dau, W. D.; David, J.; David, M.; Day, R. J.; Deffur, E.; Delcourt, B.; Del Buono, L.; Descamps, F.; Devel, M.; Dewulf, J. P.; De Roeck, A.; Dingus, P.; Djiki, K.; Dollfus, C.; Dowell, J. D.; Dreis, H. B.; Drescher, A.; Dretzler, U.; Duboc, J.; Ducorps, A.; Düllmann, D.; Dünger, O.; Duhm, H.; Dulny, B.; Dupont, F.; Ebbinghaus, R.; Eberle, M.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Edwards, B. W. H.; Efremenko, V.; Egli, S.; Eichenberger, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Ellis, N. N.; Ellison, R. J.; Elsen, E.; Epifantsev, A.; Erdmann, M.; Erdmann, W.; Ernst, G.; Evrard, E.; Falley, G.; Favart, L.; Fedotov, A.; Feeken, D.; Felst, R.; Feltesse, J.; Feng, Z. Y.; Fensome, I. 
F.; Fent, J.; Ferencei, J.; Ferrarotto, F.; Finke, K.; Flamm, K.; Flauger, W.; Fleischer, M.; Flieser, M.; Flower, P. S.; Flügge, G.; Fomenko, A.; Fominykh, B.; Forbush, M.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Fröchtenicht, W.; Fuhrmann, P.; Gabathuler, E.; Gabathuler, K.; Gadow, K.; Gamerdinger, K.; Garvey, J.; Gayler, J.; Gažo, E.; Gellrich, A.; Gennis, M.; Gensch, U.; Genzel, H.; Gerhards, R.; Geske, K.; Giesgen, I.; Gillespie, D.; Glasgow, W.; Godfrey, L.; Godlewski, J.; Goerlach, U.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goodall, A. M.; Gorelov, I.; Goritchev, P.; Gosset, L.; Grab, C.; Grässler, H.; Grässler, R.; Greenshaw, T.; Gregory, C.; Greif, H.; Grewe, M.; Grindhammer, G.; Gruber, A.; Gruber, C.; Günther, S.; Haack, J.; Haguenauer, M.; Haidt, D.; Hajduk, L.; Hammer, D.; Hamon, O.; Hampel, M.; Handschuh, D.; Hangarter, K.; Hanlon, E. M.; Hapke, M.; Harder, U.; Harjes, J.; Hartz, P.; Hatton, P. E.; Haydar, R.; Haynes, W. J.; Heatherington, J.; Hedberg, V.; Hedgecock, C. R.; Heinzelmann, G.; Henderson, R. C. W.; Henschel, H.; Herma, R.; Herynek, I.; Hildesheim, W.; Hill, P.; Hill, D. L.; Hilton, C. D.; Hladký, J.; Hoeger, K. C.; Hopes, R. B.; Horisberger, R.; Hrisoho, A.; Huber, J.; Huet, Ph.; Hufnagel, H.; Huot, N.; Huppert, J.-F.; Ibbotson, M.; Imbault, D.; Itterbeck, H.; Jabiol, M.-A.; Jacholkowska, A.; Jacobsson, C.; Jaffré, M.; Janoth, J.; Jansen, T.; Jean, P.; Jeanjean, J.; Jönsson, L.; Johannsen, K.; Johnson, D. P.; Johnson, L.; Jovanovic, P.; Jung, H.; Kalmus, P. I. P.; Kant, D.; Kant, D.; Kantel, G.; Karstensen, S.; Kasarian, S.; Kaschowitz, R.; Kasselmann, P.; Kathage, U.; Kaufmann, H. H.; Kemmerling, G.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Ko, W.; Kobler, T.; Koch, J.; Köhler, T.; Köhne, J.; Kolander, M.; Kolanoski, H.; Kole, F.; Koll, J.; Kolya, S. D.; Koppitz, B.; Korbel, V.; Korn, M.; Kostka, P.; Kotelnikov, S. K.; Krasny, M. 
W.; Krehbiel, H.; Krivan, F.; Krücker, D.; Krüger, U.; Krüner-Marquis, U.; Kubantsev, M.; Kubenka, J. P.; Külper, T.; Küsel, H.-J.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Kuznik, B.; Laforge, B.; Lamarche, F.; Lander, R.; Landon, M. P. J.; Lange, W.; Lange, W.; Langkau, R.; Lanius, P.; Laporte, J.-F.; Laptin, L.; Laskus, H.; Lebedev, A.; Lemler, M.; Lenhardt, U.; Leuschner, A.; Leverenz, C.; Levonian, S.; Lewin, D.; Ley, Ch.; Lindner, A.; Lindström, G.; Linsel, F.; Lipinski, J.; Liss, B.; Loch, P.; Lodge, A. B.; Lohmander, H.; Lopez, G. C.; Lottin, J.-P.; Lubimov, V.; Ludwig, K.; Lüers, D.; Lugetski, N.; Lundberg, B.; Maeshima, K.; Magnussen, N.; Malinovski, E.; Mani, S.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, F.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Masbender, V.; Masson, S.; Mavroidis, A.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Meissner, J.; Mercer, D.; Merz, T.; Meyer, C. A.; Meyer, H.; Meyer, J.; Mikocki, S.; Mills, J. L.; Milone, V.; Möck, J.; Monnier, E.; Montés, B.; Moreau, F.; Moreels, J.; Morgan, B.; Morris, J. V.; Morton, J. M.; Müller, K.; Murín, P.; Murray, S. A.; Nagovizin, V.; Naroska, B.; Naumann, Th.; Nayman, P.; Nepeipivo, A.; Newman, P.; Newman-Coburn, D.; Newton, D.; Neyret, D.; Nguyen, H. K.; Niebergall, F.; Niebuhr, C.; Nisius, R.; Novák, T.; Nováková, H.; Nowak, G.; Noyes, G. W.; Nyberg, M.; Oberlack, H.; Obrock, U.; Olsson, J. E.; Olszowska, J.; Orenstein, S.; Ould-Saada, F.; Pailler, P.; Palanque, S.; Panaro, E.; Panitch, A.; Parey, J.-Y.; Pascaud, C.; Patel, G. D.; Patoux, A.; Paulot, C.; Pein, U.; Peppel, E.; Perez, E.; Perrodo, P.; Perus, A.; Peters, S.; Pharabod, J.-P.; Phillips, H. T.; Phillips, J. 
P.; Pichler, Ch.; Pieuchot, A.; Pimpl, W.; Pitzl, D.; Porrovecchio, A.; Prell, S.; Prosi, R.; Quehl, H.; Rädel, G.; Raupach, F.; Rauschnabel, K.; Reboux, A.; Reimer, P.; Reinmuth, G.; Reinshagen, S.; Ribarics, P.; Riech, V.; Riedlberger, J.; Riege, H.; Riess, S.; Rietz, M.; Robertson, S. M.; Robmann, P.; Röpnack, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Royon, C.; Rudge, A.; Rüter, K.; Rudowicz, M.; Ruffer, M.; Rusakov, S.; Rusinov, V.; Rybicki, K.; Sacton, J.; Sahlmann, N.; Sanchez, E.; Sankey, D. P. C.; Savitski, M.; Schacht, P.; Schiek, S.; Schirm, N.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, C.; Schmidt, D.; Schmidt, G.; Schmitz, W.; Schmücker, H.; Schröder, V.; Schütt, J.; Schuhmann, E.; Schulz, M.; Schwind, A.; Scobel, W.; Seehausen, U.; Sefkow, F.; Sell, R.; Seman, M.; Semenov, A.; Shatalov, P.; Shekelyan, V.; Sheviakov, I.; Shooshtari, H.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Sirous, A.; Skillicorn, I. O.; Škvařil, P.; Smirnov, P.; Smith, J. R.; Smolik, L.; Sole, D.; Soloviev, Y.; Špalek, J.; Spitzer, H.; von Staa, R.; Staeck, J.; Staroba, P.; Šťastný, J.; Steenbock, M.; Štefan, P.; Steffen, P.; Steinberg, R.; Steiner, H.; Stella, B.; Stephens, K.; Stier, J.; Stiewe, J.; Stösslein, U.; Strachota, J.; Straumann, U.; Strowbridge, A.; Struczinski, W.; Sutton, J. P.; Szkutnik, Z.; Tappern, G.; Tapprogge, S.; Taylor, R. E.; Tchernyshov, V.; Tchudakov, V.; Thiebaux, C.; Thiele, K.; Thompson, G.; Thompson, R. J.; Tichomirov, I.; Trenkel, C.; Tribanek, W.; Tröger, K.; Truöl, P.; Turiot, M.; Turnau, J.; Tutas, J.; Urban, L.; Urban, M.; Usik, A.; Valkár, Š.; Valkárová, A.; Vallée, C.; Van Beek, G.; Vanderkelen, M.; Van Lancker, L.; Van Mechelen, P.; Vartapetian, A.; Vazdik, Y.; Vecko, M.; Verrecchia, P.; Vick, R.; Villet, G.; Vogel, E.; Wacker, K.; Wagener, M.; Walker, I. W.; Walther, A.; Weber, G.; Wegener, D.; Wegner, A.; Weissbach, P.; Wellisch, H. 
P.; West, L.; White, D.; Willard, S.; Winde, M.; Winter, G.-G.; Wolff, Th.; Womersley, L. A.; Wright, A. E.; Wünsch, E.; Wulff, N.; Wyborn, B. E.; Yiou, T. P.; Žáček, J.; Zarbock, D.; Závada, P.; Zeitnitz, C.; Zhang, Z.; Ziaeepour, H.; Zimmer, M.; Zimmermann, W.; Zomer, F.; Zuber, K.; H1 Collaboration
1997-02-01
Technical aspects of the three major components of the H1 detector at the electron-proton storage ring HERA are described. This paper covers the detector status up to the end of 1994 when a major upgrading of some of its elements was undertaken. A description of the other elements of the detector and some performance figures from luminosity runs at HERA during 1993 and 1994 are given in a paper previously published in this journal.
Adaptive and accelerated tracking-learning-detection
NASA Astrophysics Data System (ADS)
Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu
2013-08-01
An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD) and based on the novel Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. The first is adaptation: by generating the scale space online, the algorithm no longer depends on pre-defined scanning grids. The second is efficiency, pursued both at the algorithm level and in hardware: scale prediction with an auto-regressive moving average (ARMA) model learns the object motion to narrow the detector's search range, a fixed number of positive and negative samples ensures a constant retrieval time, and CPU and GPU parallel technology provides hardware acceleration. In addition, some details of TLD are redesigned for better results: detection results are integrated with a weight combining the normalized correlation coefficient and the scale size, and distance-metric thresholds are adjusted online. A contrastive experiment on success rate, center location error and execution time, carried out on partial TLD datasets and Shenzhou IX return capsule image sequences, shows a performance and efficiency upgrade over the state-of-the-art TLD. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.
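The scale-prediction idea can be illustrated with a reduced sketch that keeps only a low-order autoregressive part: fit AR(2) coefficients to recent scale observations by least squares and predict the next scale, which would then be used to narrow the detector's search range. This is an illustrative simplification, not the paper's ARMA implementation:

```python
# AR(2) one-step scale prediction via the 2x2 normal equations
# (no numpy dependency; a stand-in for the paper's ARMA model).

def ar2_predict(history):
    """Predict the next value of `history` from an AR(2) least-squares fit."""
    X = [(history[t - 1], history[t - 2]) for t in range(2, len(history))]
    y = [history[t] for t in range(2, len(history))]
    a11 = sum(x1 * x1 for x1, _ in X)
    a12 = sum(x1 * x2 for x1, x2 in X)
    a22 = sum(x2 * x2 for _, x2 in X)
    b1 = sum(x1 * yi for (x1, _), yi in zip(X, y))
    b2 = sum(x2 * yi for (_, x2), yi in zip(X, y))
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det   # coefficient on s[t-1]
    c2 = (a11 * b2 - a12 * b1) / det   # coefficient on s[t-2]
    return c1 * history[-1] + c2 * history[-2]
```

A tracker would centre its scanning scales around the predicted value instead of sweeping the full pre-defined grid, which is where the speedup comes from.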
An Early Quantum Computing Proposal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Stephen Russell; Alexander, Francis Joseph; Barros, Kipton Marcos
The D-Wave 2X is the third generation of quantum processor created by D-Wave. NASA (with Google and USRA) and Lockheed Martin (with USC) both own D-Wave systems. Los Alamos National Laboratory (LANL) purchased a D-Wave 2X in November 2015. The D-Wave 2X processor contains (nominally) 1152 quantum bits (or qubits) and is designed specifically to perform quantum annealing, which is a well-known method for finding a global minimum of an optimization problem. This methodology is based on direct execution of a quantum evolution in experimental quantum hardware. While this can be a powerful method for solving particular kinds of problems, it also means that the D-Wave 2X processor is not a general computing processor and cannot be programmed to perform a wide variety of tasks. It is a highly specialized processor, well beyond what NNSA currently thinks of as an "advanced architecture." A D-Wave is best described as a quantum optimizer. That is, it uses quantum superposition to find the lowest energy state of a system by repeated doses of power and settling stages. The D-Wave produces multiple solutions to any suitably formulated problem, one of which is the lowest energy state solution (global minimum). Mapping problems onto the D-Wave requires defining an objective function to be minimized and then encoding that function in the Hamiltonian of the D-Wave system. The quantum annealing method is then used to find the lowest energy configuration of the Hamiltonian using the current D-Wave Two two-level quantum processor. This is not always an easy thing to do, and the D-Wave Two has significant limitations that restrict the problem sizes that can be run and the algorithmic choices that can be made.
Furthermore, as more people are exploring this technology, it has become clear that it is very difficult to come up with general approaches to optimization that can both utilize the D-Wave and do better than highly developed algorithms on conventional computers for specific applications. These are all fundamental challenges that must be overcome for the D-Wave, or similar, quantum computing technology to be broadly applicable.
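The problem-mapping step described above (encode an objective function in a Hamiltonian, then find its lowest-energy configuration) can be sketched for a tiny Ising model. Here the ground state is found by brute force over spin configurations; on the D-Wave, physical quantum annealing replaces this search. The h and J coefficients are arbitrary illustrative values:

```python
from itertools import product

# Ising energy H(s) = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j over
# spins s_i in {-1, +1}; the objective to minimize is this energy.

def ising_energy(h, J, s):
    e = sum(h[i] * s[i] for i in range(len(s)))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

def ground_state(h, J):
    """Exhaustive search for the minimum-energy spin configuration."""
    n = len(h)
    return min(product((-1, 1), repeat=n), key=lambda s: ising_energy(h, J, s))
```

The exponential cost of this exhaustive search (2^n configurations) is exactly what makes hardware annealing attractive, and the limited qubit count and connectivity are the restrictions on problem size the abstract refers to.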
tkLayout: a design tool for innovative silicon tracking detectors
NASA Astrophysics Data System (ADS)
Bianchi, G.
2014-03-01
A new CMS tracker is scheduled to become operational for the LHC Phase 2 upgrade in the early 2020s. tkLayout is a software package developed to create 3D models for the design of the CMS tracker and to evaluate its fundamental performance figures. The new tracker will have to cope with much higher luminosity conditions, resulting in increased track density, harsher radiation exposure and, especially, much higher data acquisition bandwidth, such that equipping the tracker with triggering capabilities is envisaged. The design of an innovative detector involves deciding on an architecture offering the best trade-off among many figures of merit, such as tracking resolution, power dissipation, bandwidth, cost and so on. Quantitatively evaluating these figures of merit as early as possible in the design phase is of capital importance and is best done with the aid of software models. tkLayout is a flexible modeling tool: new performance estimates and support for different detector geometries can be quickly added, thanks to its modular structure. Besides, the software executes very quickly (about two minutes), so that many possible architectural variations can be rapidly modeled and compared, to help in the choice of a viable detector layout and then to optimize it. A tracker geometry is generated from simple configuration files defining the module types, layout and materials. Support structures are automatically added and services routed to provide a realistic tracker description. The tracker geometries thus generated can be exported to the standard CMS simulation framework (CMSSW) for full Monte Carlo studies. tkLayout has proven essential in giving guidance to CMS in studying different detector layouts and exploring the feasibility of innovative solutions for tracking detectors, in terms of design, performance and projected costs.
This tool has been one of the keys to making important design decisions for over five years now and has also enabled project engineers and simulation experts to focus their efforts on other important or specific issues. Even if tkLayout was designed for the CMS tracker upgrade project, its flexibility makes it experiment-agnostic, so that it could be easily adapted to model other tracking detectors. The technology behind tkLayout is presented, as well as some of the results obtained in the context of the CMS silicon tracker design studies.
Research in software allocation for advanced manned mission communications and tracking systems
NASA Technical Reports Server (NTRS)
Warnagiris, Tom; Wolff, Bill; Kusmanoff, Antone
1990-01-01
An assessment of the planned processing hardware and software/firmware for the Communications and Tracking System of the Space Station Freedom (SSF) was performed. The intent of the assessment was to determine the optimum distribution of software/firmware in the processing hardware for maximum throughput with minimum required memory. As a product of the assessment process, an assessment methodology was to be developed that could be used for similar assessments of future manned spacecraft system designs. The assessment process was hampered by changing requirements for the Space Station. As a result, the initial objective of determining the optimum software/firmware allocation was not fulfilled, but several useful conclusions and recommendations resulted from the assessment. It was concluded that the assessment process would not be completely successful for a system with changing requirements. It was also concluded that memory and hardware requirements were being modified to fit as a consequence of the change process, and that although throughput could not be quantified, potential problem areas could be identified. Finally, inherent flexibility of the system design was essential for the success of a system design with changing requirements. Recommendations resulting from the assessment included development of common software for some embedded controller functions, reduction of embedded processor requirements by hardwiring some Orbital Replacement Units (ORUs) to make better use of processor capabilities, and improvement in communications between software development personnel to enhance the integration process. Lastly, a critical observation was made that the software integration tasks did not appear to be addressed in the design process to the degree necessary for successful satisfaction of the system requirements.
Servo Platform Circuit Design of Pendulous Gyroscope Based on DSP
NASA Astrophysics Data System (ADS)
Tan, Lilong; Wang, Pengcheng; Zhong, Qiyuan; Zhang, Cui; Liu, Yunfei
2018-03-01
A certain type of pendulous gyroscope with an initial installation deviation of more than 40 degrees poses a problem: the servo platform cannot keep up with the speed of the gyroscope during the rough north-seeking phase. To solve it, this paper takes the digital signal processor TMS320F28027 as the core and uses an incremental digital PID algorithm to carry out the circuit design of the servo platform. Firstly, the hardware circuit is divided into three parts: the DSP minimum system, the motor driving circuit and the signal processing circuit. Then the mathematical model of the incremental digital PID algorithm is established and, based on the model, the PID control program is written in CCS3.3. Finally, a servo motor tracking control experiment is carried out; it shows that the design significantly improves the tracking ability of the servo platform and has good engineering practicality.
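The incremental (velocity-form) digital PID law computes a control increment Δu[k] = Kp(e[k] − e[k−1]) + Ki·e[k] + Kd(e[k] − 2e[k−1] + e[k−2]) rather than an absolute command, so no explicit integrator state accumulates. A minimal sketch, with illustrative gains rather than the paper's tuned values for the TMS320F28027 servo loop:

```python
# Incremental digital PID controller: outputs the control increment
# delta_u[k] from the last three error samples.

class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # e[k-1]
        self.e2 = 0.0  # e[k-2]

    def step(self, e):
        """Return delta_u for the new error sample e[k]."""
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return du
```

The increment is added to the previous actuator command, which suits motor drives: a processor reset or saturation does not dump a large accumulated integral term onto the plant.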
Automated absolute phase retrieval in across-track interferometry
NASA Technical Reports Server (NTRS)
Madsen, Soren N.; Zebker, Howard A.
1992-01-01
Discussed is a key element in the processing of topographic radar maps acquired by the NASA/JPL airborne synthetic aperture radar configured as an across-track interferometer (TOPSAR). TOPSAR utilizes a single transmit and two receive antennas; the three-dimensional target location is determined by triangulation based on a known baseline and two measured slant ranges. The slant range difference is determined very accurately from the phase difference between the signals received by the two antennas. This phase is measured modulo 2π, whereas it is the absolute phase which relates directly to the difference in slant range. It is shown that splitting the range bandwidth into two subbands in the processor and processing each individually allows the absolute phase to be determined. The underlying principles and the system errors which must be considered are discussed, together with the implementation and results from processing data acquired during the summer of 1991.
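The split-bandwidth idea can be sketched numerically: each subband measures the interferometric phase modulo 2π, and their difference forms a coarse but unambiguous phase that selects the integer 2π cycle of the fine measurement. The frequencies and delay below are arbitrary demonstration values, not TOPSAR parameters:

```python
import math

# Split-bandwidth absolute-phase retrieval sketch: two subbands at centre
# frequencies f1 > f2 each observe phase = 2*pi*f*tau only modulo 2*pi.

def wrap(phi):
    """Wrap a phase into (-pi, pi]."""
    return math.atan2(math.sin(phi), math.cos(phi))

def absolute_phase(phi1_wrapped, phi2_wrapped, f1, f2):
    """Recover the absolute phase at f1 from two wrapped subband phases,
    valid while |2*pi*(f1-f2)*tau| < pi (coarse phase unambiguous)."""
    dphi = wrap(phi1_wrapped - phi2_wrapped)      # coarse, unambiguous phase
    tau = dphi / (2.0 * math.pi * (f1 - f2))      # coarse delay estimate
    coarse_phi1 = 2.0 * math.pi * f1 * tau        # predicted phase at f1
    n = round((coarse_phi1 - phi1_wrapped) / (2.0 * math.pi))
    return phi1_wrapped + 2.0 * math.pi * n       # resolve the 2*pi cycle
```

The coarse estimate only needs to be accurate to better than half a cycle at f1 for the integer ambiguity to resolve correctly, which is what constrains the subband separation and phase noise in practice.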
The path toward HEP High Performance Computing
NASA Astrophysics Data System (ADS)
Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-06-01
High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach.
The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
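The fine-grain scheduling of particle vectors can be caricatured with a thread pool: particles from many events are regrouped into fixed-size baskets and dispatched to a pool of workers, rather than devoting one thread to each whole event. The transport step below is a placeholder, not Geant-V physics, and the basket size is an arbitrary illustrative choice:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy fine-grain scheduler: regroup particles into fixed-size baskets and
# map a (vectorizable) transport step over them with a worker pool.

BASKET_SIZE = 4

def transport_step(basket):
    # placeholder for vectorized propagation of all particles in a basket
    return [energy * 0.5 for energy in basket]

def schedule(particles, n_workers=4):
    baskets = [particles[i:i + BASKET_SIZE]
               for i in range(0, len(particles), BASKET_SIZE)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(transport_step, baskets)   # order-preserving
    return [e for basket in results for e in basket]
```

Because every basket has the same size regardless of which event its particles came from, worker utilisation stays high even for event-processing tails, which is the point of the fine-grain approach over event-level parallelism.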
Two-dimensional acousto-optic processor using circular antenna array with a Butler matrix
NASA Astrophysics Data System (ADS)
Lee, Jim P.
1992-09-01
A two-dimensional acousto-optic signal processor is shown to be useful for providing simultaneous spectrum analysis and direction finding of radar signals over an instantaneous field of view of 360 deg. A system analysis with emphasis on the direction-finding aspect of this new architecture is presented. The peak location of the optical pattern provides a direct measure of bearing, independent of signal frequency. In addition, the sidelobe levels of the pattern can be effectively reduced using amplitude weighting. Performance parameters, such as mainlobe beamwidth, peak-sidelobe level, and pointing error, are analyzed as a function of the Gaussian laser illumination profile and the number of channels. Finally, a comparison with a linear antenna array architecture is also discussed.
An Upgrade of the Aeroheating Software "MINIVER"
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Many software packages assist engineers with performing flight vehicle analysis, but some of these packages have gone many years without updates or significant improvements to their workflows. One such software package, known as MINIVER, is a powerful yet lightweight tool used for aeroheating analyses. However, it is an aging program that has not seen major improvements within the past decade. As part of a collaborative effort with the Florida Institute of Technology, MINIVER has received a major user interface overhaul, a change in program language, and will be continually receiving updates to improve its capabilities. The user interface update includes a migration from a command-line interface to that of a graphical user interface supported in the Windows operating system. The organizational structure of the pre-processor has been transformed to clearly defined categories to provide ease of data entry. Helpful tools have been incorporated, including the ability to copy sections of cases as well as a generalized importer which aids in bulk data entry. A visual trajectory editor has been included, as well as a CAD Editor which allows the user to input simplified geometries in order to generate MINIVER cases in bulk. To demonstrate its continued effectiveness, a case involving the JAXA OREX flight vehicle will be included, providing comparisons to captured flight data as well as other computational solutions. The most recent upgrade effort incorporated the use of the CAD Editor, and current efforts are investigating methods to link MINIVER projects with SINDA/Fluint and Thermal Desktop.
Parallel text rendering by a PostScript interpreter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kritskii, S.P.; Zastavnoi, B.A.
1994-11-01
The most radical method of increasing the performance of devices controlled by PostScript interpreters may be the use of multiprocessor controllers. This paper presents a method for parallelizing the operation of a PostScript interpreter for rendering text. The proposed method is based on decomposition of the outlines of letters into horizontal strips covering equal areas. The subroutines thus obtained are distributed to the processors in a network and then filled in by conventional sequential algorithms. A special algorithm has been developed for dividing the outlines of characters into subroutines so that each may be colored independently of the others. The algorithm uses special estimates to verify that a partition is correct, so that the corresponding outlines are divided into horizontal strips; a method is presented for finding such estimates. Two different processing approaches are presented. In the first, one of the processors performs the decomposition of the outlines and distributes the strips to the remaining processors, which are responsible for the rendering. In the second approach, the decomposition process is itself distributed among the processors in the network.
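The equal-area horizontal-strip decomposition can be illustrated with a small sketch. Here `width_of_y` is a hypothetical stand-in for scanning a character outline's horizontal extent at a given height, and the triangle example is ours, not from the paper:

```python
import numpy as np

def equal_area_cuts(width_of_y, y_min, y_max, n_strips, n_samples=10000):
    """Cut heights splitting a shape into horizontal strips of equal area.

    width_of_y(y) gives the total horizontal extent of the shape at height y
    (a hypothetical stand-in for the real outline-scanning step).
    """
    y = np.linspace(y_min, y_max, n_samples)
    w = np.array([width_of_y(v) for v in y])
    cum = np.cumsum(w)                 # cumulative area, up to a scale factor
    cum = cum / cum[-1]
    targets = np.arange(1, n_strips) / n_strips
    return [y[np.searchsorted(cum, t)] for t in targets]

# A triangle with base width 2 at y=0 and apex at y=1: width(y) = 2*(1-y).
# Half of its area lies below y = 1 - sqrt(1/2) ~ 0.293.
cuts = equal_area_cuts(lambda v: 2.0 * (1.0 - v), 0.0, 1.0, 2)
```

Each strip then carries roughly the same fill workload, which is what makes the per-processor rendering load balance.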
NASA Astrophysics Data System (ADS)
Senyukov, S.; Baudot, J.; Besson, A.; Claus, G.; Cousin, L.; Dorokhov, A.; Dulinski, W.; Goffe, M.; Hu-Guo, C.; Winter, M.
2013-12-01
The apparatus of the ALICE experiment at CERN will be upgraded in 2017/18 during the second long shutdown of the LHC (LS2). A major motivation for this upgrade is to extend the physics reach for charmed and beauty particles down to low transverse momenta. This requires a substantial improvement of the spatial resolution and the data rate capability of the ALICE Inner Tracking System (ITS). To achieve this goal, the new ITS will be equipped with 50 μm thin CMOS Pixel Sensors (CPS) covering either the three innermost layers or all 7 layers of the detector. The CPS being developed for the ITS upgrade at IPHC (Strasbourg) is derived from the MIMOSA 28 sensor realised for the STAR-PXL at RHIC in a 0.35 μm CMOS process. In order to satisfy the ITS upgrade requirements in terms of readout speed and radiation tolerance, a CMOS process with a reduced feature size and a high-resistivity epitaxial layer should be exploited. In this respect, the charged particle detection performance and radiation hardness of the TowerJazz 0.18 μm CMOS process were studied with the help of the first prototype chip, MIMOSA 32. Beam tests performed with negative pions of 120 GeV/c at the CERN SPS yielded a signal-to-noise ratio (SNR) for the non-irradiated chip in the range between 22 and 32, depending on the pixel design. The chip irradiated with a combined dose of 1 MRad and 10^13 n_eq/cm^2 was observed to yield an SNR ranging between 11 and 23 for coolant temperatures varying from 15 °C to 30 °C. These SNR values were found to result in particle detection efficiencies above 99.5% and 98% before and after irradiation, respectively. These satisfactory results validate the TowerJazz 0.18 μm CMOS process for the ALICE ITS upgrade.
Design and performance of the SLD vertex detector: a 307 Mpixel tracking system
NASA Astrophysics Data System (ADS)
Abe, K.; Arodzero, A.; Baltay, C.; Brau, J. E.; Breidenbach, M.; Burrows, P. N.; Chou, A. S.; Crawford, G.; Damerell, C. J. S.; Dervan, P. J.; Dong, D. N.; Emmet, W.; English, R. L.; Etzion, E.; Foss, M.; Frey, R.; Haller, G.; Hasuko, K.; Hertzbach, S. S.; Hoeflich, J.; Huffer, M. E.; Jackson, D. J.; Jaros, J. A.; Kelsey, J.; Lee, I.; Lia, V.; Lintern, A. L.; Liu, M. X.; Manly, S. L.; Masuda, H.; McKemey, A. K.; Moore, T. B.; Nichols, A.; Nagamine, T.; Oishi, N.; Osborne, L. S.; Russell, J. J.; Ross, D.; Serbo, V. V.; Sinev, N. B.; Sinnott, J.; Skarpaas, K. Viii; Smy, M. B.; Snyder, J. A.; Strauss, M. G.; Dong, S.; Suekane, F.; Taylor, F. E.; Trandafir, A. I.; Usher, T.; Verdier, R.; Watts, S. J.; Weiss, E. R.; Yashima, J.; Yuta, H.; Zapalac, G.
1997-02-01
This paper describes the design, construction, and initial operation of SLD's upgraded vertex detector which comprises 96 two-dimensional charge-coupled devices (CCDs) with a total of 307 Mpixel. Each pixel functions as an independent particle detecting element, providing space point measurements of charged particle tracks with a typical precision of 4 μm in each co-ordinate. The CCDs are arranged in three concentric cylinders just outside the beam-pipe which surrounds the e+e− collision point of the SLAC Linear Collider (SLC). The detector is a powerful tool for distinguishing displaced vertex tracks, produced by decay in flight of heavy flavour hadrons or tau leptons, from tracks produced at the primary event vertex. The requirements for this detector include a very low mass structure (to minimize multiple scattering) both for mechanical support and to provide signal paths for the CCDs; operation at low temperature with a high degree of mechanical stability; and high speed CCD readout, signal processing, and data sparsification. The lessons learned in achieving these goals should be useful for the construction of large arrays of CCDs or active pixel devices in the future in a number of areas of science and technology.
Broadband Processing in a Noisy Shallow Ocean Environment: A Particle Filtering Approach
Candy, J. V.
2016-04-14
Here we report that when a broadband source propagates sound in a shallow ocean, the received data can become quite complicated due to temperature-related sound-speed variations and therefore a highly dispersive environment. Noise and uncertainties disrupt this already chaotic environment even further, because disturbances propagate through the same inherent acoustic channel. The broadband (signal) estimation/detection problem can be decomposed into a set of narrowband solutions that are processed separately and then combined to achieve greater enhancement of signal levels than is available from a single frequency, thereby allowing more information to be extracted and leading to more reliable source detection. A Bayesian solution to the broadband modal function tracking, pressure-field enhancement, and source detection problem is developed that leads to nonparametric estimates of the desired posterior distributions, enabling the estimation of useful statistics and an improved processor/detector. In conclusion, to investigate the processor's capabilities, we synthesize an ensemble of noisy, broadband, shallow-ocean measurements to evaluate its overall performance, using an information-theoretic metric for the preprocessor and the receiver operating characteristic curve for the detector.
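The nonparametric posterior estimation underlying such a processor can be sketched with a minimal bootstrap particle filter on a generic scalar random-walk state; the model and noise levels below are illustrative assumptions, not the paper's normal-mode ocean model:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(observations, n_particles=500, q=0.05, r=0.2):
    """Minimal bootstrap particle filter for a random-walk state x_k observed
    in additive noise, y_k = x_k + v_k.  Returns the posterior-mean track."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        particles = particles + rng.normal(0.0, q, n_particles)   # predict
        logw = -0.5 * ((y - particles) / r) ** 2                  # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))                   # MMSE estimate
        idx = rng.choice(n_particles, n_particles, p=w)           # resample
        particles = particles[idx]
    return np.array(estimates)

# Track a slowly drifting quantity (a toy "modal amplitude") from noisy data.
truth = np.linspace(0.0, 1.0, 100)
obs = truth + rng.normal(0.0, 0.2, 100)
est = bootstrap_pf(obs)
```

The resampled particle cloud is exactly the "nonparametric estimate of the desired posterior" the abstract refers to; any statistic (mean, mode, credible interval) can be read off it.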
Video sensor architecture for surveillance applications.
Sánchez, Jordi; Benet, Ginés; Simó, José E
2012-01-01
This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.
Image processing using Gallium Arsenide (GaAs) technology
NASA Technical Reports Server (NTRS)
Miller, Warner H.
1989-01-01
The demand for increased information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to increasing the efficiency of channel usage is through data compression techniques. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state of the art of onboard processing was recognized, and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed which is based on an 8-bit-slice general processor. The reason for choosing the compression technique for the Multi-spectral Linear Array (MLA) instrument is described. Also described is the GaAs integrated circuit chip set, which demonstrates that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
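A minimal sketch of DPCM with a non-uniform quantizer follows; the level set and the previous-sample predictor are illustrative choices, since the abstract does not give the flight quantizer's actual design:

```python
import numpy as np

def dpcm_codec(samples, levels):
    """DPCM with a previous-sample predictor and a fixed non-uniform quantizer.
    Encoder and decoder share the same prediction, so quantization error does
    not accumulate: each residual already includes the previous error."""
    levels = np.asarray(levels, dtype=float)
    prediction, reconstructed = 0.0, []
    for s in samples:
        diff = s - prediction                         # prediction residual
        q = levels[np.argmin(np.abs(levels - diff))]  # nearest quantizer level
        prediction = prediction + q                   # decoder tracks encoder
        reconstructed.append(prediction)
    return np.array(reconstructed)

# Non-uniform levels: fine steps near zero, coarse steps for large residuals.
levels = [-32, -16, -8, -4, -2, -1, 0, 1, 2, 4, 8, 16, 32]
signal = 20 * np.sin(np.linspace(0, 4 * np.pi, 200))
recon = dpcm_codec(signal, levels)
```

Because image samples are strongly correlated, the residuals stay small and land in the finely quantized region, which is where the compression gain comes from.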
Impact of device level faults in a digital avionic processor
NASA Technical Reports Server (NTRS)
Suk, Ho Kim
1989-01-01
This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.
NASA Astrophysics Data System (ADS)
1992-10-01
The overall objective of the DARPA/Tri-Service RASSP program is to demonstrate a capability to rapidly specify, produce, and field domain-specific, affordable signal processors for use in Department of Defense systems such as automatic target acquisition, tracking, and recognition, electronic countermeasures, communications, and SIGINT. The objective of the study phase is to specify a recommended program plan for the government to use as a template for procurement of the RASSP design system and demonstration program. To accomplish that objective, the study phase program tasks are to specify a development methodology for signal processors (adaptable to various organizational design styles and application areas), analyze the requirements for CAD/CAE tools to support the development methodology, identify the state and development plans of the industry relative to this area, and recommend the additional developments not currently being addressed by the industry, which are recommended as RASSP developments. In addition, the RASSP study phase will define an approach for electronically linking design centers to manufacturing centers so that a complete prototyping cycle can be accomplished with significantly reduced cycle time.
Digital Beamforming Scatterometer
NASA Technical Reports Server (NTRS)
Rincon, Rafael F.; Vega, Manuel; Kman, Luko; Buenfil, Manuel; Geist, Alessandro; Hillard, Larry; Racette, Paul
2009-01-01
This paper discusses scatterometer measurements collected with the multi-mode Digital Beamforming Synthetic Aperture Radar (DBSAR) during the SMAP-VEX 2008 campaign. The 2008 SMAP Validation Experiment was conducted to address a number of specific questions related to the soil moisture retrieval algorithms. SMAP-VEX 2008 consisted of a series of aircraft-based flights conducted on the Eastern Shore of Maryland and Delaware in the fall of 2008. Several other instruments participated in the campaign, including the Passive Active L-Band System (PALS), the Marshall Airborne Polarimetric Imaging Radiometer (MAPIR), and the Global Positioning System Reflectometer (GPSR). This campaign was the first SMAP Validation Experiment. DBSAR is a multimode radar system developed at NASA/Goddard Space Flight Center that combines state-of-the-art radar technologies, on-board processing, and advances in signal processing techniques in order to enable new remote sensing capabilities applicable to Earth science and planetary applications [1]. The instrument can be configured to operate in scatterometer, Synthetic Aperture Radar (SAR), or altimeter mode. The system builds upon the L-band Imaging Scatterometer (LIS) developed as part of the RadSTAR program. The radar is a phased array system designed to fly on the NASA P3 aircraft. The instrument consists of a programmable waveform generator, eight transmit/receive (T/R) channels, a microstrip antenna, and a reconfigurable data acquisition and processor system. Each transmit channel incorporates a digital attenuator and digital phase shifter that enable amplitude and phase modulation on transmit. The attenuators, phase shifters, and calibration switches are digitally controlled by the radar control card (RCC) on a pulse-by-pulse basis. The antenna is a corporate-fed microstrip patch array centered at 1.26 GHz with a 20 MHz bandwidth.
Although only one feed is used with the present configuration, a provision was made for separate corporate feeds for vertical and horizontal polarization. System upgrades to dual polarization are currently under way. The DBSAR processor is a reconfigurable data acquisition and processor system capable of real-time, high-speed data processing. DBSAR uses an FPGA-based architecture to implement digital down-conversion, in-phase and quadrature (I/Q) demodulation, and subsequent radar-specific algorithms. The core of the processor board consists of an analog-to-digital (A/D) section, three Altera Stratix field programmable gate arrays (FPGAs), an ARM microcontroller, several memory devices, and an Ethernet interface. The processor also interfaces with a navigation board consisting of a GPS and a MEMS gyro. The processor has been configured to operate in scatterometer, Synthetic Aperture Radar (SAR), and altimeter modes. All the modes are based on digital beamforming, a digital process that generates the far-field beam patterns at various scan angles from voltages sampled in the antenna array. This technique allows steering the received beam and controlling its beamwidth and side-lobe levels. Several beamforming techniques can be implemented, each characterized by unique strengths and weaknesses and each applicable to different measurement scenarios. In scatterometer mode, the radar is capable of generating a wide beam or scanning a narrow beam on transmit, and of steering the received beam in processing while controlling its beamwidth and side-lobe level. Table I lists some important radar characteristics.
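The core digital beamforming step, generating far-field beams at arbitrary scan angles from sampled antenna voltages, can be sketched as a phase-shift (delay-and-sum) beamformer. The 8-channel count matches the instrument's T/R channels, but the half-wavelength element spacing and the scan grid are our assumptions:

```python
import numpy as np

def beamform(voltages, steer_deg, spacing=0.5):
    """Phase-shift (delay-and-sum) beamformer for a uniform linear array.
    voltages: complex baseband samples, shape (n_channels, n_samples).
    spacing: element spacing in wavelengths."""
    n = voltages.shape[0]
    phase = 2 * np.pi * spacing * np.arange(n) * np.sin(np.radians(steer_deg))
    weights = np.exp(-1j * phase)          # align the wavefront at steer_deg
    return weights @ voltages / n

# Simulate a unit-amplitude plane wave arriving at 20 degrees on 8 channels.
n, arrival = 8, 20.0
phase_in = 2 * np.pi * 0.5 * np.arange(n) * np.sin(np.radians(arrival))
snapshot = np.exp(1j * phase_in)[:, None] * np.ones((1, 16))

# Scan the received beam digitally; the peak recovers the arrival angle.
angles = list(range(-60, 61, 5))
power = [np.mean(np.abs(beamform(snapshot, a))) for a in angles]
best = angles[int(np.argmax(power))]
```

Because the steering is applied to stored samples rather than in hardware, the same snapshot can be re-beamformed at every scan angle, which is what lets DBSAR trade beamwidth and side-lobe level in processing.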
Radiation Hard Silicon Particle Detectors for Phase-II LHC Trackers
NASA Astrophysics Data System (ADS)
Oblakowska-Mucha, A.
2017-02-01
The major LHC upgrade is planned after ten years of accelerator operation. It is foreseen to significantly increase the luminosity of the current machine, up to 10^35 cm^-2 s^-1, and operate it as the upcoming High Luminosity LHC (HL-LHC). The major detector upgrade, called the Phase-II Upgrade, is also planned, a main reason being the aging processes caused by severe particle radiation. Within the RD50 Collaboration, a large research and development program has been under way to develop silicon sensors with sufficient radiation tolerance for HL-LHC trackers. In this summary, several results obtained during testing of the devices after irradiation to HL-LHC levels are presented. Among the studied structures one can find advanced sensor types such as 3D silicon detectors, High-Voltage CMOS technologies, and sensors with intrinsic gain (LGADs). Based on these results, the RD50 Collaboration gives recommendations for the silicon detectors to be used in the detector upgrade.
Black, Stuart; Ferrell, Jack R
2017-02-07
Carbonyl compounds present in bio-oils are known to be responsible for bio-oil property changes upon storage and during upgrading. Specifically, carbonyls cause an increase in viscosity (often referred to as 'aging') during storage of bio-oils. As such, carbonyl content has previously been used as a method of tracking bio-oil aging and condensation reactions with less variability than viscosity measurements. Additionally, carbonyls are also responsible for coke formation in bio-oil upgrading processes. Given the importance of carbonyls in bio-oils, accurate analytical methods for their quantification are very important for the bio-oil community. Potentiometric titration methods based on carbonyl oximation have long been used for the determination of carbonyl content in pyrolysis bio-oils. Here, we present a modification of the traditional carbonyl oximation procedures that results in less reaction time, smaller sample size, higher precision, and more accurate carbonyl determinations. While traditional carbonyl oximation methods occur at room temperature, the Faix method presented here occurs at an elevated temperature of 80 °C.
Development of a modular test system for the silicon sensor R&D of the ATLAS Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H.; Benoit, M.; Chen, H.
High Voltage CMOS sensors are a promising technology for tracking detectors in collider experiments. Extensive R&D studies are being carried out by the ATLAS Collaboration for a possible use of HV-CMOS in the High Luminosity LHC upgrade of the Inner Tracker detector. CaRIBOu (Control and Readout Itk BOard) is a modular test system developed to test silicon-based detectors. It currently includes five custom-designed boards, a Xilinx ZC706 development board, a FELIX (Front-End LInk eXchange) PCIe card, and a host computer. A software program has been developed in Python to control the CaRIBOu hardware. CaRIBOu has been used in the testbeam of the HV-CMOS sensor AMS180v4 at CERN. Preliminary results have shown that the test system is very versatile. Further development is ongoing to adapt it to different sensors and to make it available to various lab test stands.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguiam, D. E., E-mail: daguiam@ipfn.tecnico.ulisboa.pt; Silva, A.; Carvalho, P. J.
A new multichannel frequency-modulated continuous-wave reflectometry diagnostic has been successfully installed and commissioned on ASDEX Upgrade to measure the plasma edge electron density profile evolution in front of the Ion Cyclotron Range of Frequencies (ICRF) antenna. The design of the new three-strap ICRF antenna integrates ten pairs (sending and receiving) of microwave reflectometry antennas. The multichannel reflectometer can use three of these to measure the edge electron density profiles up to 2 × 10^19 m^-3, at different poloidal locations, allowing the direct study of the local plasma layers in front of the ICRF antenna. ICRF power coupling, operational effects, and poloidal variations of the plasma density profile can be consistently studied for the first time. In this work the diagnostic hardware architecture is described, and the obtained density profile measurements were used to track the outer radial plasma position and plasma shape.
A new telescope control system for the Telescopio Nazionale Galileo: I - derotators
NASA Astrophysics Data System (ADS)
Ghedina, Adriano; Gonzalez, Manuel; Perez Ventura, Hector; Carmona, Candido; Riverol, Luis
2014-07-01
Telescopio Nazionale Galileo (TNG) is a 4 m class active-optics telescope at the observatory of Roque de Los Muchachos. To maintain optimum performance during observations and ensure continued reliability, the telescope control system (TCS) of the TNG is going through a deep upgrade after nearly 20 years of service. The original glass encoders and bulb lamp heads are being replaced with modern steel scale drums and scanning units. The obsolete electronic racks and computers for the control loops are being replaced with modern and compact commercial drivers, with a net improvement in the tracking error RMS. To minimize the number of observing nights lost to the mechanical and electronic changes, the new TCS is being developed and tested in parallel with the existing one, and the full upgrade will be achieved in three steps. We describe here the first step, affecting the mechanical derotators at the Nasmyth foci.
Digital receiver study and implementation
NASA Technical Reports Server (NTRS)
Fogle, D. A.; Lee, G. M.; Massey, J. C.
1972-01-01
Computer software was developed which makes it possible to use any general purpose computer with A/D conversion capability as a PSK receiver for low data rate telemetry processing. Carrier tracking, bit synchronization, and matched filter detection are all performed digitally. To aid in the implementation of optimum computer processors, a study of general digital processing techniques was performed which emphasized various techniques for digitizing general analog systems. In particular, the phase-locked loop was extensively analyzed as a typical non-linear communication element. Bayesian estimation techniques for PSK demodulation were studied. A hardware implementation of the digital Costas loop was developed.
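A digital Costas loop of the kind mentioned above can be sketched in a few lines; the loop gains, arm-filter bandwidth, and signal parameters below are illustrative assumptions, not the study's design values:

```python
import numpy as np

def costas_loop(samples, f_carrier, fs, alpha=0.05, beta=0.0005):
    """Minimal second-order digital Costas loop for BPSK carrier tracking.
    Returns the lowpass-filtered I arm, i.e. the demodulated baseband."""
    phase = 0.0
    freq = 2 * np.pi * f_carrier / fs     # NCO step, initialized at nominal
    i_f = q_f = 0.0                       # one-pole arm lowpass filters
    i_out = []
    for s in samples:
        i = s * np.cos(phase)             # in-phase mixer
        q = -s * np.sin(phase)            # quadrature mixer
        i_f += 0.1 * (i - i_f)            # lowpass the double-frequency terms
        q_f += 0.1 * (q - q_f)
        err = i_f * q_f                   # Costas detector: invariant to bit sign
        freq += beta * err                # second-order loop: integrate frequency
        phase += freq + alpha * err       # advance the NCO
        i_out.append(i_f)
    return np.array(i_out)

# BPSK at 1 symbol per 100 samples, carrier 10 Hz sampled at 100 Hz,
# with an initial carrier phase offset of 0.4 rad for the loop to pull in.
fs, fc = 100.0, 10.0
bits = np.repeat(np.array([1, -1, 1, 1, -1, 1, -1, -1]), 100)
n = np.arange(bits.size)
signal = bits * np.cos(2 * np.pi * fc / fs * n + 0.4)
baseband = costas_loop(signal, fc, fs)
```

The I*Q error term is unchanged when a data bit flips both arms, which is exactly why the Costas structure can track a suppressed carrier through BPSK modulation; note the usual 180-degree phase ambiguity remains.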
An inexpensive digital tape recorder suitable for neurophysiological signals.
Lamb, T D
1985-10-01
Modifications are described which convert an inexpensive 'Digital Audio Processor' (Sony PCM-701ES), together with a video cassette recorder, into a high performance digital tape recorder, with two analog channels of 16 bit resolution and DC-20 kHz bandwidth. A further modification is described which optionally provides four additional 1-bit digital channels by sacrificing the least significant four bits of one analog channel. If required two additional high quality analog channels may be obtained by use of one of the new video cassette recorders (such as the Sony SL-HF100) which incorporate a pair of FM tracks.
NASA Technical Reports Server (NTRS)
Stone, M. S.; Mcadam, P. L.; Saunders, O. W.
1977-01-01
The results are presented of a 4 month study to design a hybrid analog/digital receiver for outer planet mission probe communication links. The scope of this study includes functional design of the receiver; comparisons between analog and digital processing; hardware tradeoffs for key components including frequency generators, A/D converters, and digital processors; development and simulation of the processing algorithms for acquisition, tracking, and demodulation; and detailed design of the receiver in order to determine its size, weight, power, reliability, and radiation hardness. In addition, an evaluation was made of the receiver's capabilities to perform accurate measurement of signal strength and frequency for radio science missions.
System and method for modeling and analyzing complex scenarios
Shevitz, Daniel Wolf
2013-04-09
An embodiment of the present invention includes a method for analyzing and solving possibility trees. A possibility tree having a plurality of programmable nodes is constructed and solved with a solver module executed by a processor element. The solver module executes the programming of said nodes and tracks the state of at least one variable through a branch. When a variable of said branch is out of tolerance with a parameter, the solver disables the remaining nodes of the branch and marks the branch as an invalid solution. The valid solutions are then aggregated and displayed as valid tree solutions.
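The solver behavior described in the claim (execute each node's programming, track a variable down a branch, prune a branch the moment the variable leaves tolerance, then aggregate the valid solutions) can be sketched as follows; the `(update_function, children)` node format is a hypothetical simplification of the patent's programmable nodes:

```python
def solve_tree(node, state, tolerance, solutions=None):
    """Walk a programmable-node possibility tree, tracking variable x through
    each branch and pruning branches whose x falls out of tolerance."""
    if solutions is None:
        solutions = []
    update, children = node
    x = update(state)                    # execute this node's programming
    lo, hi = tolerance
    if not (lo <= x <= hi):
        return solutions                 # out of tolerance: disable the rest of the branch
    if not children:
        solutions.append(x)              # leaf reached with a valid state
    for child in children:
        solve_tree(child, x, tolerance, solutions)
    return solutions

leaf_ok = (lambda x: x + 1, [])
leaf_bad = (lambda x: x + 100, [])       # drives x out of tolerance: branch pruned
root = (lambda x: x, [leaf_ok, leaf_bad, (lambda x: x * 2, [leaf_ok])])
valid = solve_tree(root, 1, (0, 10))     # -> [2, 3]
```

Pruning at the first out-of-tolerance node is what keeps the search tractable: no programming below a disabled node is ever executed.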
2005-05-01
…simulated test…to obtain the transmission loss and reverberation diagrams for 18 elements (one source, one towed array, and 16 buoys)…were recorded using a 1.5 GHz Pentium 4 processor. The test results indicate that the Bellhop program runs fast enough to provide the required acoustic…was determined that the Bellhop program will be fast enough for these clients. Future Plans: It is intended to integrate further enhancements that…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, A.
This report outlines findings from a U.S. Department of Energy Building America expert meeting on how HVAC companies can transition from traditional contractor status to service providers for whole house energy upgrade contracting. IBACOS has embarked upon a research effort under the Building America Program to understand business impacts and change management strategies for HVAC companies. HVAC companies can implement these strategies in order to quickly transition from a 'traditional' heating and cooling contractor to a service provider for whole house energy upgrade contracting. Because HVAC service contracts allow repeat interaction with homeowners, HVAC companies are ideally positioned in the marketplace to resolve homeowner comfort issues through whole house energy upgrades. There are essentially two primary routes of transition for an HVAC contractor taking on whole house performance contracting: (1) sub-contracting out the shell repair/upgrade work; and (2) integrating the shell repair/upgrade work into the existing business. IBACOS held an Expert Meeting on the topic of Transitioning Traditional HVAC Contractors to Whole House Performance Contractors on March 29, 2011 in San Francisco, CA. The major objectives of the meeting were to: review and validate the general business models for traditional HVAC companies and whole house energy upgrade companies; review preliminary findings on the differences between the structure of traditional HVAC companies and whole house energy upgrade companies; and seek industry input on how to structure information so it is relevant and useful for traditional HVAC contractors who are transitioning to becoming whole house energy upgrade contractors. Seven industry experts identified by IBACOS participated in the session along with one representative from the National Renewable Energy Laboratory (NREL).
The objective of the meeting was to validate the general operational profile of an integrated whole house performance contracting company and identify the most significant challenges facing a traditional HVAC contractor looking to transition to a whole house performance contractor. To facilitate the discussion, IBACOS divided the business operations profile of a typical integrated whole house performance contracting company (one that performs both HVAC and shell repair/upgrade work) into seven Operational Areas, with more detailed Business Functions and Work Activities falling under each high-level Operational Area. The expert panel was asked to review the operational profile or 'map' of the Business Functions. The specific Work Activities within the Business Functions identified as potential transition barriers were rated by the group according to the value of IBACOS creating guidance to ensure a successful transition, and the relative difficulty of execution.
NASA Astrophysics Data System (ADS)
Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.
2016-03-01
Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of tumor motion during dose delivery has become greater due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image-intensity-based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms, which take too much time and effort to develop. To address this challenge we have developed an open software platform focused on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity-based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance-critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable for tracking tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies to patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
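As a rough sketch of the architecture described (class and method names are illustrative, not from the patent): several allocator instances, one per processor in the subset, operate against a common database of allocation state.

```python
import threading

class ComputeProcessorAllocator:
    """One allocator instance; several such instances share `database`,
    a common record of which compute processors are free. A lock stands
    in for the database's consistency mechanism."""
    def __init__(self, database, lock):
        self.db = database      # shared: {processor_id: app_name or None}
        self.lock = lock

    def allocate(self, app, count):
        """Reserve `count` free processors for `app`; return their ids."""
        with self.lock:
            free = [pid for pid, owner in self.db.items() if owner is None]
            if len(free) < count:
                return []       # not enough processors available
            chosen = free[:count]
            for pid in chosen:
                self.db[pid] = app
            return chosen

    def release(self, app):
        """Return all processors held by `app` to the free pool."""
        with self.lock:
            for pid, owner in self.db.items():
                if owner == app:
                    self.db[pid] = None

db = {pid: None for pid in range(8)}
lock = threading.Lock()
alloc_a = ComputeProcessorAllocator(db, lock)   # allocator on one node
alloc_b = ComputeProcessorAllocator(db, lock)   # allocator on another node
print(alloc_a.allocate("app1", 3))  # [0, 1, 2]
print(alloc_b.allocate("app2", 3))  # [3, 4, 5]
```

Because both allocators consult the same database, either one can serve an allocation request, which is the distribution property the abstract emphasizes.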
Orthorectification by Using Gpgpu Method
NASA Astrophysics Data System (ADS)
Sahin, H.; Kulur, S.
2012-07-01
Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors, with much faster computation and higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that need intensive calculation. This interest gave rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful yet inexpensive hardware, and so they have become an alternative to conventional processors. Graphics chips, once fixed-function application hardware, have been transformed into modern, powerful, and programmable processors, and in recent years the use of graphics processing units for general purpose computation has drawn researchers and developers to the field. The biggest problem is that graphics processing units use a programming model unlike conventional programming methods: multi-core GPUs cannot be programmed with traditional, event-driven techniques, so efficient GPU programming requires re-coding the existing algorithm with the limitations and structure of the graphics hardware in mind. GPUs are especially effective at repeating the same computing steps over many data elements when high accuracy is needed, so they carry out such computations quickly and accurately; by comparison, CPUs, which perform one computation at a time under flow control, are slower for these workloads. This capability can be exploited in many areas of computing. This study covers how the general purpose parallel programming model and computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were processed and the results compared. The GPGPU method is particularly useful for repeating the same computations over highly dense data, and thus reaches a solution quickly.
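As an illustration of the data-parallel structure exploited above, the following sketch performs projective rectification by mapping every output pixel independently through an inverse homography. NumPy broadcasting stands in for a CUDA kernel launch (one thread per pixel), and the homography values are invented for illustration:

```python
import numpy as np

def rectify(image, H_inv):
    """Projective rectification: every output pixel is mapped through the
    inverse homography H_inv and sampled (nearest neighbor) from `image`.
    The per-pixel independence is what maps naturally onto GPU threads."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs)
    pts = np.stack([xs.ravel(), ys.ravel(), ones.ravel()])  # homogeneous coords
    src = H_inv @ pts
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    inside = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(image)
    out.ravel()[inside] = image[sy[inside], sx[inside]]
    return out

img = np.arange(16.0).reshape(4, 4)
out = rectify(img, np.eye(3))   # identity homography leaves the image unchanged
```

A CUDA version would compute `sx, sy` in each thread for its own output pixel; the arithmetic per pixel is identical, which is why the speedup grows with image size.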
Organizational Design for USSOCOM Rapid Acquisition
2017-03-31
cycle times, leaving operators at a strategic disadvantage. Despite SOF AT&L’s innovative approaches, it still finds itself leaving SOF operators potentially at a strategic disadvantage. Its goal is to understand how it can adapt to take advantage of technology advances and upgrade its technologies at the speed of the commercial market.
Tracking on non-active collaborative objects from San Fernando Laser station
NASA Astrophysics Data System (ADS)
Catalán, Manuel; Quijano, Manuel; Cortina, Luis M.; Pazos, Antonio A.; Martín-Davila, José
2016-04-01
The Royal Observatory of the Spanish Navy (ROA) has worked on satellite geodesy since the early days of the space age, when its first artificial satellite tracking telescope, the Baker-Nunn camera, was installed in 1958. In 1975 a French satellite laser ranging (SLR) station was installed and operated at ROA. Since 1980, ROA has operated this instrument, which was upgraded to a third-generation system and is still kept continuously updated to reach the highest level of operability. Since then ROA has participated in different space geodesy campaigns through the International Laser Ranging Service (ILRS) or its European regional organization (EUROLAS), tracking a number of artificial satellite types: ERS, ENVISAT, LAGEOS and TOPEX-POSEIDON, to name but a few. Recently we opened a new field of research, space debris tracking, which is receiving increasing importance and attention from international space agencies. The main problem is the relatively low accuracy of commonly used methods; it is clear that improving the predicted orbit accuracy is necessary to fulfill our aims (avoiding unnecessary anti-collision maneuvers, ...). Following results obtained by other colleagues (Austria, China, USA, ...), we proposed to share our time schedule, using our satellite ranging station to obtain data that will make orbital element predictions far more accurate (sub-meter accuracy), while we still keep our tracking routines over active satellites. In this communication we report the actions carried out to date.
GPU-accelerated track reconstruction in the ALICE High Level Trigger
NASA Astrophysics Data System (ADS)
Rohr, David; Gorbunov, Sergey; Lindenstruth, Volker;
2017-10-01
ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online compute farm which reconstructs events measured by the ALICE detector in real time. The most compute-intensive part is the reconstruction of particle trajectories, called tracking, and the most important detector for tracking is the Time Projection Chamber (TPC). The HLT uses a GPU-accelerated algorithm for TPC tracking based on the Cellular Automaton principle and on the Kalman filter; the GPU tracking has been running in 24/7 operation since 2012, through LHC Runs 1 and 2. In order to better leverage the potential of the GPUs and speed up the overall HLT reconstruction, we plan to bring more reconstruction steps (e.g. the tracking for other detectors) onto the GPUs. Several tasks currently running on the CPU could benefit from cooperation with the tracking, which is hardly feasible at the moment due to the delay of the PCI Express transfers. Moving more steps onto the GPU, and processing them there at once, will reduce PCI Express transfers and free up CPU resources. On top of that, modern GPUs and GPU programming APIs provide new features which are not yet exploited by the TPC tracking. We present our new developments for GPU reconstruction, with a focus both on online GPU reconstruction for the online-offline computing upgrade in ALICE during LHC Run 3 and on how the current HLT in Run 2 can profit from these improvements.
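The Kalman filter at the heart of the tracker can be illustrated with a toy straight-line fit. This is a deliberately simplified 2-parameter state with invented geometry and noise values; the real TPC tracker propagates a full 5-parameter helix state with material effects:

```python
import numpy as np

def kalman_track_fit(zs, meas, sigma):
    """Fit a straight-line track state x = [position, slope] from position
    measurements `meas` at detector layers `zs` (measurement noise `sigma`)."""
    x = np.array([meas[0], 0.0])          # initial state estimate
    P = np.diag([sigma**2, 1.0])          # initial covariance (vague slope prior)
    H = np.array([[1.0, 0.0]])            # we measure position only
    R = np.array([[sigma**2]])
    for k in range(1, len(zs)):
        dz = zs[k] - zs[k - 1]
        F = np.array([[1.0, dz], [0.0, 1.0]])          # straight-line transport
        x = F @ x                                       # predict state
        P = F @ P @ F.T                                 # predict covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + (K @ (meas[k] - H @ x)).ravel()         # measurement update
        P = (np.eye(2) - K @ H) @ P
    return x

zs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
rng = np.random.default_rng(1)
meas = 0.5 * zs + rng.normal(0, 0.01, zs.size)  # true slope 0.5 plus noise
x = kalman_track_fit(zs, meas, 0.01)
print(x[1])   # fitted slope, close to the true value 0.5
```

The sequential layer-by-layer structure is what makes the filter fast, while the Cellular Automaton step (not shown) supplies the initial grouping of hits into track candidates.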
NASA Astrophysics Data System (ADS)
Abt, I.; Ahmed, T.; Aid, S.; Andreev, V.; Andrieu, B.; Appuhn, R. D.; Arnault, C.; Arpagaus, M.; Babaev, A.; Bärwolff, H.; Bán, J.; Banas, E.; Baranov, P.; Barrelet, E.; Bartel, W.; Barth, M.; Bassler, U.; Basti, F.; Baynham, D. E.; Baze, J.-M.; Beck, G. A.; Beck, H. P.; Bederede, D.; Behrend, H.-J.; Beigbeder, C.; Belousov, A.; Berger, Ch.; Bergstein, H.; Bernard, R.; Bernardi, G.; Bernet, R.; Bernier, R.; Berthon, U.; Bertrand-Coremans, G.; Besançon, M.; Beyer, R.; Biasci, J.-C.; Biddulph, P.; Bidoli, V.; Binder, E.; Binko, P.; Bizot, J.-C.; Blobel, V.; Blouzon, F.; Blume, H.; Borras, K.; Boudry, V.; Bourdarios, C.; Brasse, F.; Braunschweig, W.; Breton, D.; Brettel, H.; Brisson, V.; Bruncko, D.; Brune, C.; Buchner, U.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Burmeister, P.; Busata, A.; Buschhorn, G.; Campbell, A. J.; Carli, T.; Charles, F.; Charlet, M.; Chase, R.; Clarke, D.; Clegg, A. B.; Colombo, M.; Commichau, V.; Connolly, J. F.; Cornett, U.; Coughlan, J. A.; Courau, A.; Cousinou, M.-C.; Coutures, Ch.; Coville, A.; Cozzika, G.; Cragg, D. A.; Criegee, L.; Cronström, H. I.; Cunliffe, N. H.; Cvach, J.; Cyz, A.; Dagoret, S.; Dainton, J. B.; Danilov, M.; Dann, A. W. E.; Darvill, D.; Dau, W. D.; David, J.; David, M.; Day, R. J.; Deffur, E.; Delcourt, B.; Del Buono, L.; Descamps, F.; Devel, M.; Dewulf, J. P.; De Roeck, A.; Dingus, P.; Djidi, K.; Dollfus, C.; Dowell, J. D.; Dreis, H. B.; Drescher, A.; Dretzler, U.; Duboc, J.; Ducorps, A.; Düllmann, D.; Dünger, O.; Duhm, H.; Dulny, B.; Dupont, F.; Ebbinghaus, R.; Eberle, M.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Edwards, B. W. H.; Efremenko, V.; Egli, S.; Eichenberger, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Ellis, N. N.; Ellison, R. J.; Elsen, E.; Epifantsev, A.; Erdmann, M.; Erdmann, W.; Ernst, G.; Evrard, E.; Falley, G.; Favart, L.; Fedotov, A.; Feeken, D.; Felst, R.; Feltesse, J.; Feng, Z. Y.; Fensome, I. 
F.; Fent, J.; Ferencei, J.; Ferrarotto, F.; Finke, K.; Flamm, K.; Flauger, W.; Fleischer, M.; Flieser, M.; Flower, P. S.; Flügge, G.; Fomenko, A.; Fominykh, B.; Forbush, M.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Fröchtenicht, W.; Fuhrmann, P.; Gabathuler, E.; Gabathuler, K.; Gadow, K.; Gamerdinger, K.; Garvey, J.; Gayler, J.; Gažo, E.; Gellrich, A.; Gennis, M.; Gensch, U.; Genzel, H.; Gerhards, R.; Geske, K.; Giesgen, I.; Gillespie, D.; Glasgow, W.; Godfrey, L.; Godlewski, J.; Goerlach, U.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goodall, A. M.; Gorelov, I.; Goritchev, P.; Gosset, L.; Grab, C.; Grässler, H.; Grässler, R.; Greenshaw, T.; Gregory, C.; Greif, H.; Grewe, M.; Grindhammer, G.; Gruber, A.; Gruber, C.; Günther, S.; Haack, J.; Haguenauer, M.; Haidt, D.; Hajduk, L.; Hammer, D.; Hamon, O.; Hampel, M.; Handschuh, D.; Hangarter, K.; Hanlon, E. M.; Hapke, M.; Harder, U.; Harjes, J.; Hartz, P.; Hatton, P. E.; Haydar, R.; Haynes, W. J.; Heatherington, J.; Hedberg, V.; Hedgecock, C. R.; Heinzelmann, G.; Henderson, R. C. W.; Henschel, H.; Herma, R.; Herynek, I.; Hildesheim, W.; Hill, P.; Hill, D. L.; Hilton, C. D.; Hladký, J.; Hoeger, K. C.; Hopes, R. B.; Horisberger, R.; Hrisoho, A.; Huber, J.; Huet, Ph.; Hufnagel, H.; Huot, N.; Huppert, J.-F.; Ibbotson, M.; Imbault, D.; Itterbeck, H.; Jabiol, M.-A.; Jacholkowska, A.; Jacobsson, C.; Jaffré, M.; Jansen, T.; Jean, P.; Jeanjean, J.; Jönsson, L.; Johannsen, K.; Johnson, D. P.; Johnson, L.; Jovanovic, P.; Jung, H.; Kalmus, P. I. P.; Kant, D.; Kantel, G.; Karstensen, S.; Kasarian, S.; Kaschowitz, R.; Kasselmann, P.; Kathage, U.; Kaufmann, H. H.; Kemmerling, G.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Ko, W.; Kobler, T.; Koch, J.; Köhler, T.; Köhne, J.; Kolander, M.; Kolanoski, H.; Kole, F.; Koll, J.; Kolya, S. D.; Koppitz, B.; Korbel, V.; Korn, M.; Kostka, P.; Kotelnikov, S. K.; Krasny, M. 
W.; Krehbiel, H.; Krivan, F.; Krücker, D.; Krüger, U.; Krüner-Marquis, U.; Kubantsev, M.; Kubenka, J. P.; Külper, T.; Küsel, H.-J.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Kuznik, B.; Laforge, B.; Lamarche, F.; Lander, R.; Landon, M. P. J.; Lange, W.; Lange, W.; Langkau, R.; Lanius, P.; Laporte, J.-F.; Laptin, L.; Laskus, H.; Lebedev, A.; Lemler, M.; Lenhardt, U.; Leuschner, A.; Leverenz, C.; Levonian, S.; Lewin, D.; Ley, Ch.; Lindner, A.; Lindström, G.; Linsel, F.; Lipinski, J.; Liss, B.; Loch, P.; Lodge, A. B.; Lohmander, H.; Lopez, G. C.; Lottin, J.-P.; Lubimov, V.; Ludwig, K.; Lüers, D.; Lugetski, N.; Lundberg, B.; Maeshima, K.; Magnussen, N.; Malinovski, E.; Mani, S.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, F.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Masbender, V.; Masson, S.; Mavroidis, A.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Meissner, J.; Mercer, D.; Merz, T.; Meyer, C. A.; Meyer, H.; Meyer, J.; Mikocki, S.; Mills, J. L.; Milone, V.; Möck, J.; Monnier, E.; Montés, B.; Moreau, F.; Moreels, J.; Morgan, B.; Morris, J. V.; Morton, J. M.; Müller, K.; Murín, P.; Murray, S. A.; Nagovizin, V.; Naroska, B.; Naumann, Th.; Nayman, P.; Nepeipivo, A.; Newman, P.; Newman-Coburn, D.; Newton, D.; Neyret, D.; Nguyen, H. K.; Niebergall, F.; Niebuhr, C.; Nisius, R.; Novák, T.; Nováková, H.; Nowak, G.; Noyes, G. W.; Nyberg, M.; Oberlack, H.; Obrock, U.; Olsson, J. E.; Olszowska, J.; Orenstein, S.; Ould-Saada, F.; Pailler, P.; Palanque, S.; Panaro, E.; Panitch, A.; Parey, J.-Y.; Pascaud, C.; Patel, G. D.; Patoux, A.; Paulot, C.; Pein, U.; Peppel, E.; Perez, E.; Perrodo, P.; Perus, A.; Peters, S.; Pharabod, J.-P.; Phillips, H. T.; Phillips, J. 
P.; Pichler, Ch.; Pieuchot, A.; Pimpl, W.; Pitzl, D.; Porrovecchio, A.; Prell, S.; Prosi, R.; Quehl, H.; Rädel, G.; Raupach, F.; Rauschnabel, K.; Reboux, A.; Reimer, P.; Reinmuth, G.; Reinshagen, S.; Ribarics, P.; Riech, V.; Riedlberger, J.; Riege, H.; Riess, S.; Rietz, M.; Robertson, S. M.; Robmann, P.; Röpnack, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Royon, C.; Rudge, A.; Rüter, K.; Rudowicz, M.; Ruffer, M.; Rusakov, S.; Rusinov, V.; Rybicki, K.; Sacton, J.; Sahlmann, N.; Sanchez, E.; Sankey, D. P. C.; Savitski, M.; Schacht, P.; Schiek, S.; Schirm, N.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, C.; Schmidt, D.; Schmidt, G.; Schmitz, W.; Schmücker, H.; Schröder, V.; Schütt, J.; Schuhmann, E.; Schulz, M.; Schwind, A.; Scobel, W.; Seehausen, U.; Sefkow, F.; Sell, R.; Seman, M.; Semenov, A.; Shatalov, P.; Shekelyan, V.; Sheviakov, I.; Shooshtari, H.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Sirous, A.; Skillicorn, I. O.; Škvařil, P.; Smirnov, P.; Smith, J. R.; Smolik, L.; Sole, D.; Soloviev, Y.; Špalek, J.; Spitzer, H.; von Staa, R.; Staeck, J.; Staroba, P.; Šťastný, J.; Steenbock, M.; Štefan, P.; Steffen, P.; Steinberg, R.; Steiner, H.; Stella, B.; Stephens, K.; Stier, J.; Stiewe, J.; Stösslein, U.; Strachota, J.; Straumann, U.; Strowbridge, A.; Struczinski, W.; Sutton, J. P.; Szkutnik, Z.; Tappern, G.; Tapprogge, S.; Taylor, R. E.; Tchernyshov, V.; Tchudakov, V.; Thiebaux, C.; Thiele, K.; Thompson, G.; Thompson, R. J.; Tichomirov, I.; Trenkel, C.; Tribanek, W.; Tröger, K.; Truöl, P.; Turiot, M.; Turnau, J.; Tutas, J.; Urban, L.; Urban, M.; Usik, A.; Valkár, Š.; Valkárová, A.; Vallée, C.; Van Beek, G.; Vanderkelen, M.; Van Lancker, L.; Van Mechelen, P.; Vartapetian, A.; Vazdik, Y.; Vecko, M.; Verrecchia, P.; Vick, R.; Villet, G.; Vogel, E.; Wacker, K.; Wagener, M.; Walker, I. W.; Walther, A.; Weber, G.; Wegener, D.; Wegner, A.; Weissbach, P.; Wellisch, H. 
P.; West, L.; White, D.; Willard, S.; Winde, M.; Winter, G.-G.; Wolff, Th.; Womersley, L. A.; Wright, A. E.; Wünsch, E.; Wulff, N.; Wyborn, B. E.; Yiou, T. P.; Žáček, J.; Zarbock, D.; Závada, P.; Zeitnitz, C.; Zhang, Z.; Ziaeepour, H.; Zimmer, M.; Zimmermann, W.; Zomer, F.; Zuber, K.; H1 Collaboration
1997-02-01
General aspects of the H1 detector at the electron-proton storage ring HERA as well as technical descriptions of the magnet, luminosity system, trigger, slow-control, data acquisition and off-line data handling are given. The three major components of the detector, the tracking, calorimeter and muon detectors, will be described in a forthcoming article. The present paper describes the detector that was used from 1992 to the end of 1994. After this a major upgrade of some components was undertaken. Some performance figures from luminosity runs at HERA during 1993 and 1994 are given.
Open Heavy Flavor and Quarkonia Results at RHIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nouicer, Rachid
RHIC experiments carry out a comprehensive physics program which studies open heavy flavor and quarkonium production in relativistic heavy-ion collisions. The discovery at RHIC of large high-pT suppression and flow of electrons from heavy quark flavors has altered our view of the hot and dense matter formed in central Au + Au collisions at √s_NN = 200 GeV. These results suggest a large energy loss and flow of heavy quarks in the hot, dense matter. In recent years, the RHIC experiments upgraded their detectors: (1) the PHENIX Collaboration installed a silicon vertex tracker (VTX) at mid-rapidity and a forward silicon vertex tracker (FVTX) at forward rapidity, and (2) the STAR Collaboration installed the heavy flavor tracker (HFT) and the muon telescope detector (MTD), both at mid-rapidity. With these new upgrades, both experiments have collected large data samples, and the new detectors enhance the capability of heavy flavor measurements via precision tracking. The PHENIX experiment established measurements of ψ(1S) and ψ(2S) production as a function of system size in p + p, p + Al, p + Au, and 3He + Au collisions at √s_NN = 200 GeV. In p/3He + A collisions at forward rapidity, we observe no difference in the ψ(2S)/ψ(1S) ratio relative to p + p collisions. At backward rapidity, where the comoving particle density is higher, we find that the ψ(2S) is preferentially suppressed by a factor of two. The STAR Collaboration presents the first J/ψ and Υ measurements in the di-muon decay channel in Au + Au collisions at √s_NN = 200 GeV at mid-rapidity at RHIC. Here, we observe clear J/ψ R_AA suppression, qualitatively well described by transport models simultaneously accounting for dissociation and regeneration processes.
Open Heavy Flavor and Quarkonia Results at RHIC
NASA Astrophysics Data System (ADS)
Nouicer, Rachid
2017-12-01
RHIC experiments carry out a comprehensive physics program which studies open heavy flavor and quarkonium production in relativistic heavy-ion collisions. The discovery at RHIC of large high-pT suppression and flow of electrons from heavy quark flavors has altered our view of the hot and dense matter formed in central Au + Au collisions at √s_NN = 200 GeV. These results suggest a large energy loss and flow of heavy quarks in the hot, dense matter. In recent years, the RHIC experiments upgraded their detectors: (1) the PHENIX Collaboration installed a silicon vertex tracker (VTX) at mid-rapidity and a forward silicon vertex tracker (FVTX) at forward rapidity, and (2) the STAR Collaboration installed the heavy flavor tracker (HFT) and the muon telescope detector (MTD), both at mid-rapidity. With these new upgrades, both experiments have collected large data samples, and the new detectors enhance the capability of heavy flavor measurements via precision tracking. The PHENIX experiment established measurements of ψ(1S) and ψ(2S) production as a function of system size in p + p, p + Al, p + Au, and 3He + Au collisions at √s_NN = 200 GeV. In p/3He + A collisions at forward rapidity, we observe no difference in the ψ(2S)/ψ(1S) ratio relative to p + p collisions. At backward rapidity, where the comoving particle density is higher, we find that the ψ(2S) is preferentially suppressed by a factor of two. The STAR Collaboration presents the first J/ψ and ϒ measurements in the di-muon decay channel in Au + Au collisions at √s_NN = 200 GeV at mid-rapidity at RHIC. We observe clear J/ψ R_AA suppression, qualitatively well described by transport models simultaneously accounting for dissociation and regeneration processes.
Space Shuttle Star Tracker Challenges
NASA Technical Reports Server (NTRS)
Herrera, Linda M.
2010-01-01
The space shuttle fleet's avionics were originally designed in the 1970s. Many of the subsystems have been upgraded and replaced; however, some original hardware continues to fly, and has proven to be the best design available to perform its designated task. The shuttle star tracker system currently flies as a mixture of old and new designs, each filling a unique purpose for the mission. Over the years, orbiter missions have tackled many varied tasks in space. As the orbiters began flying to the International Space Station (ISS), new challenges were discovered and overcome as new trusses and modules were added. For the star tracker subsystem, the growing ISS posed an unusual problem: bright light. With two star trackers on board, the 1970s-vintage image dissector tube (IDT) star tracker tracks the ISS, while the new solid state design is used for dim star tracking. This presentation focuses on the challenges and solutions used to ensure the star trackers can complete shuttle missions successfully. Topics include the methods used by the KSC team and industry partners to correct pressurized case failures and to track system performance.
Fast front-end electronics for semiconductor tracking detectors: Trends and perspectives
NASA Astrophysics Data System (ADS)
Rivetti, Angelo
2014-11-01
In the past few years, extensive research efforts pursued by both industry and academia have led to major improvements in the performance of Analog-to-Digital Converters (ADCs) and Time-to-Digital Converters (TDCs). ADCs achieving 8-10 bit resolution, 50-100 MHz conversion frequency and less than 1 mW power consumption are today's standard, while TDCs have reached sub-picosecond time resolution. These results have been made possible by architectural upgrades combined with the use of ultra-deep-submicron CMOS technologies with minimum feature sizes of 130 nm or smaller. Front-end ASICs in which prompt digitization is followed by signal conditioning in the digital domain can now be envisaged even within the tight power budget typically available in high-density tracking systems. Furthermore, tracking detectors embedding high-resolution timing capabilities are gaining interest. In this paper, ADC and TDC developments of particular relevance for the design of front-end electronics for semiconductor trackers are discussed, along with the benefits and challenges of exploiting such high-performance building blocks in the next generation of ASICs for high-granularity particle detectors.
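A common way to compare such converters is the Walden figure of merit, FOM = P / (2^ENOB · f_s), the energy spent per effective conversion step. Plugging in the ballpark numbers quoted above (the ENOB and sampling-rate values chosen here are illustrative assumptions, not figures from the paper):

```python
def walden_fom(power_w, enob_bits, fs_hz):
    """Walden figure of merit: energy per effective conversion step (J)."""
    return power_w / (2 ** enob_bits * fs_hz)

# Quoted ballpark: <1 mW power at 50-100 MHz conversion rate, 8-10 bit
# resolution. Assume ENOB = 9 and fs = 100 MHz for the estimate.
fom = walden_fom(1e-3, 9, 100e6)
print(fom)  # ≈ 2e-14 J, i.e. roughly 20 fJ per conversion step
```

Budgeting a front-end channel then reduces to multiplying this per-step energy by the required resolution and rate, which is why such ADCs fit the per-channel power limits of dense trackers.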
NASA Astrophysics Data System (ADS)
Tsui, Eddy K.; Thomas, Russell L.
2004-09-01
As part of the Commanding General of Army Materiel Command's Research, Development & Engineering Command (RDECOM), the U.S. Army Armament Research, Development and Engineering Center (ARDEC) at Picatinny funded a joint development effort with McQ Associates, Inc. to develop an Advanced Minefield Sensor (AMS) as a technology evaluation prototype for the Anti-Personnel Landmine Alternatives (APLA) Track III program. This effort laid the fundamental groundwork of smart sensors for detection and classification of targets, identification of combatants or noncombatants, target location and tracking at and between sensors, fusion of information across targets and sensors, and automatic situation awareness for the first responder. The effort culminated in a performance-oriented architecture meeting size, weight, and power (SWaP) requirements. The integrated digital signal processor (DSP) paradigm is capable of processing signals from the sensor modalities to extract the needed information, within either a 360° or fixed field of view, with an acceptable false alarm rate. This paper discusses the challenges in the development of such a sensor, focusing on achieving reasonable operating ranges, low power, small size, and low cost, and on applications and extensions of this technology.
Bai, Mingsian R; Pan, Weichi; Chen, Hungyu
2018-03-01
Active noise control (ANC) of headsets is revisited in this paper. An in-depth electroacoustic analysis of the combined loudspeaker-cavity headset system is conducted on the basis of electro-mechano-acoustical analogous circuits. Model matching of the primary path and the secondary path leads to a feedforward control architecture, and the ideal controller sheds some light on the key parameters that affect the noise reduction performance. The filtered-X least-mean-squares (FxLMS) algorithm is employed to implement the feedforward controller on a digital signal processor. Since the relative delay of the primary path and the secondary path is crucial to the noise reduction performance, multirate signal processing with polyphase implementation is utilized to minimize the effective analog-digital conversion delay in the secondary path. Ad hoc decimation and interpolation filters are designed so as not to introduce excessive phase delays at the cutoff. Real-time experiments are undertaken to validate the implemented ANC system. Listening tests are also conducted to compare the fixed controller and the adaptive controller in terms of noise reduction and signal tracking performance for three noise types. The results demonstrate that the fixed feedforward controller achieved satisfactory noise reduction performance and signal tracking quality.
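A minimal single-channel FxLMS loop illustrates the algorithm named above. This toy sketch omits the paper's multirate polyphase processing and uses an invented one-sample-delay secondary path; it is not the DSP implementation:

```python
import numpy as np

def fxlms(x, d, s, n_taps=16, mu=0.05):
    """Minimal FxLMS: adapt an FIR controller so its output, after the
    secondary path S(z), cancels the disturbance d at the error mic.
    x: reference noise, d: disturbance, s: FIR model of S(z)."""
    w = np.zeros(n_taps)                     # adaptive controller taps
    fx = np.convolve(x, s)[:len(x)]          # reference filtered through S(z)
    xbuf = np.zeros(n_taps)                  # recent reference samples
    fxbuf = np.zeros(n_taps)                 # recent filtered-x samples
    ybuf = np.zeros(len(s))                  # recent controller outputs
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx[n]
        ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf   # anti-noise output
        e[n] = d[n] - s @ ybuf               # residual after secondary path
        w += mu * e[n] * fxbuf               # FxLMS weight update
    return e

# Tonal noise; the secondary path is modeled as a pure unit delay.
n = np.arange(4000)
x = np.sin(0.1 * np.pi * n)                  # reference signal
d = 0.8 * np.sin(0.1 * np.pi * (n - 1))      # noise reaching the error mic
s = np.array([0.0, 1.0])                     # secondary path: unit delay
e = fxlms(x, d, s)                           # residual error shrinks over time
```

Filtering the reference through the secondary-path model before the weight update is exactly the "filtered-X" step; without it, the delay through S(z) can destabilize plain LMS.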
The Impact of 3D Stacking and Technology Scaling on the Power and Area of Stereo Matching Processors
Ok, Seung-Ho; Lee, Yong-Hwan; Shim, Jae Hoon; Lim, Sung Kyu; Moon, Byungin
2017-01-01
Recently, stereo matching processors have been adopted in real-time embedded systems such as intelligent robots and autonomous vehicles, which require minimal hardware resources and low power consumption. Meanwhile, thanks to the through-silicon via (TSV), three-dimensional (3D) stacking technology has emerged as a practical solution to achieving the desired requirements of a high-performance circuit. In this paper, we present the benefits of 3D stacking and process technology scaling on stereo matching processors. We implemented 2-tier 3D-stacked stereo matching processors with GlobalFoundries 130-nm and Nangate 45-nm process design kits and compare them with their two-dimensional (2D) counterparts to identify comprehensive design benefits. In addition, we examine the findings from various analyses to identify the power benefits of 3D-stacked integrated circuit (IC) and device technology advancements. From experiments, we observe that the proposed 3D-stacked ICs, compared to their 2D IC counterparts, obtain 43% area, 13% power, and 14% wire length reductions. In addition, we present a logic partitioning method suitable for a pipeline-based hardware architecture that minimizes the use of TSVs. PMID:28241437
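The core operation such stereo matching processors accelerate can be sketched as naive sum-of-absolute-differences (SAD) block matching. This is illustrative software only; the paper's pipeline-partitioned hardware architecture is not reproduced here:

```python
import numpy as np

def stereo_sad(left, right, block=3, max_disp=8):
    """Naive SAD block matching: for each left-image pixel, find the
    horizontal disparity d whose right-image patch matches best.
    Returns an integer disparity map (zeros at the borders)."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_cost = 0, np.inf
            for dd in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - dd - r:x - dd + r + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best = cost, dd
            disp[y, x] = best
    return disp

# Synthetic pair: the right image is the left shifted 2 px leftward,
# so the true disparity is 2 everywhere away from the borders.
rng = np.random.default_rng(0)
left = rng.random((12, 16))
right = np.roll(left, -2, axis=1)
d = stereo_sad(left, right)
print(d[4:8, 6:10])   # interior disparities: all 2
```

The fully regular per-pixel loop nest is what makes the algorithm amenable to a hardware pipeline, and the short, local data reuse pattern is what a TSV-aware logic partitioning can exploit.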
Graphical processors for HEP trigger systems
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-02-01
General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to employ GPUs as accelerators in offline computations. With the steady decrease of GPU latencies and the increase in link and memory throughputs, the time is ripe for real-time applications using GPUs in high-energy physics data acquisition and trigger systems. We discuss the use of online parallel computing on GPUs for synchronous low-level trigger systems, focusing on tests performed on the trigger of the CERN NA62 experiment. The latencies of all components need analysing, networking being the most critical; to keep it under control, we envisioned NaNet, an FPGA-based PCIe Network Interface Card (NIC) enabling a GPUDirect connection. Moreover, we discuss how specific trigger algorithms can be parallelised, and thus benefit from a GPU implementation in terms of increased execution speed. Such improvements are particularly relevant for the foreseen LHC luminosity upgrade, where highly selective algorithms will be crucial to maintain sustainable trigger rates despite very high pileup.
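The trigger algorithms that parallelise well apply the same decision independently to every event. As a hedged sketch (the thresholds and data are invented, and NumPy vectorisation over the event batch stands in for the per-event parallelism a GPU would exploit):

```python
import numpy as np

def trigger_mask(batch, hit_thresh=0.5, multiplicity_cut=3):
    """Data-parallel low-level trigger decision: each row of `batch` is one
    event's channel energies; accept the event when the number of channels
    over threshold reaches the multiplicity cut. Every event is independent,
    so on a GPU each event maps to its own thread block."""
    hits = (batch > hit_thresh).sum(axis=1)   # per-event hit multiplicity
    return hits >= multiplicity_cut           # per-event accept decision

rng = np.random.default_rng(7)
events = rng.random((1000, 64))               # 1000 events x 64 channels
accepted = trigger_mask(events)
print(accepted.shape, accepted.dtype)         # (1000,) bool
```

In a synchronous trigger the hard part is not this arithmetic but bounding the end-to-end latency, which is why the data path (NaNet's GPUDirect transfers) matters as much as the kernel itself.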
The PALM-3000 high-order adaptive optics system for Palomar Observatory
NASA Astrophysics Data System (ADS)
Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa
2008-07-01
Deployed as a multi-user shared facility on the 5.1-meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 3368 active actuator woofer and a 349 active actuator tweeter, controlled at up to 3 kHz by an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and a visible-light truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible light integral field spectrograph, Project 1640 (a near-infrared coronagraphic integral field spectrograph), and 888Cam, a high-resolution visible light imager.
Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro
2012-10-15
There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.
Joint Optics Structures Experiment (JOSE)
NASA Technical Reports Server (NTRS)
Founds, David
1987-01-01
The objective of the JOSE program is to develop, demonstrate, and evaluate active vibration suppression techniques for Directed Energy Weapons (DEW). DEW system performance is strongly influenced by line-of-sight (LOS) stability and, in some cases, by wavefront quality. The missions envisioned for DEW systems by the Strategic Defense Initiative require LOS stability and wavefront quality significantly better than any currently demonstrated capability. The Active Control of Space Structures (ACOSS) program led to the development of a number of promising structural control techniques, but DEW structures are vastly more complex than any structures controlled to date. They will be subject to disturbances of significantly higher magnitude and wider bandwidth, while holding tighter tolerances on allowable motions and deformations. Meeting the performance requirements of the JOSE program requires upgrading the ACOSS techniques to meet new, more stringent requirements; developing the requisite sensors and actuators, improved control processors, and highly accurate system identification methods; and integrating the hardware and methodologies into a successful demonstration.
STAR Online Framework: from Metadata Collection to Event Analysis and System Control
NASA Astrophysics Data System (ADS)
Arkhipkin, D.; Lauret, J.
2015-05-01
In preparation for the new era of RHIC running (the RHIC-II upgrades and, possibly, the eRHIC era), the STAR experiment is expanding its modular Message Interface and Reliable Architecture framework (MIRA). MIRA has allowed STAR to integrate meta-data collection, monitoring, and online QA components in a very agile and efficient manner using a messaging infrastructure approach. In this paper, we briefly summarize our past achievements, provide an overview of recent development activities focused on messaging patterns, and describe our experience with the complex event processor (CEP) recently integrated into the MIRA framework. The CEP was used in the recent RHIC Run 14, which provided practical use cases. Finally, we present our requirements and expectations for the planned expansion of our systems, which will allow our framework to acquire features typically associated with Detector Control Systems. Special attention is given to aspects related to latency, scalability, and interoperability within the heterogeneous set of services and the various data and meta-data acquisition components coexisting in the STAR online domain.
MIDAS: Lessons learned from the first spaceborne atomic force microscope
NASA Astrophysics Data System (ADS)
Bentley, Mark Stephen; Arends, Herman; Butler, Bart; Gavira, Jose; Jeszenszky, Harald; Mannel, Thurid; Romstedt, Jens; Schmied, Roland; Torkar, Klaus
2016-08-01
The Micro-Imaging Dust Analysis System (MIDAS) atomic force microscope (AFM) onboard the Rosetta orbiter was the first such instrument launched into space, in 2004. Designed only a few years after the technique was invented, MIDAS is currently orbiting comet 67P/Churyumov-Gerasimenko and producing the highest resolution 3D images of cometary dust ever made in situ. After more than a year of continuous operation, much experience has been gained with this novel instrument. Coupled with operations of the Flight Spare and advances in terrestrial AFM, a set of "lessons learned" has been produced, culminating in recommendations for future spaceborne atomic force microscopes. The majority of the design could be reused as-is, or with incremental upgrades to include more modern components (e.g. the processor). Key additional recommendations are to incorporate an optical microscope to aid the search for particles and image registration, and to include a variety of cantilevers (with different spring constants) and a variety of tip geometries.
NASA Astrophysics Data System (ADS)
Mayangsari, W.; Prasetyo, A. B.; Prasetiyo, Puguh
2018-04-01
Limonite nickel ore has potential as a raw material for ferronickel or nickel matte, but its nickel content is low, so process development is needed to find an acceptable route for upgrading the nickel. The aim of this research is to determine the upgrading of Ni content resulting from selective reduction of limonite nickel pellets followed by magnetic separation, as a function of reduction temperature and time as well as coal and CaSO4 addition. The research comprises four steps: preparation, including characterization of the raw ore and pelletization; selective reduction; magnetic separation; and characterization of the products by AAS, XRD and SEM. The results show that the pellet form can upgrade 77.78% more than the powder form. The Ni and Fe contents were upgraded up to 3-fold and 1.5-fold, respectively, relative to the raw ore when reduced at 1100°C for 60 minutes with coal and CaSO4 additions of 10% each. Excess CaSO4 addition caused fayalite formation. Moreover, sulfur from the CaSO4 also helped to reach a low melting point and enlarge the particle size of the metal formed.
STAR: FPGA-based software defined satellite transponder
NASA Astrophysics Data System (ADS)
Davalle, Daniele; Cassettari, Riccardo; Saponara, Sergio; Fanucci, Luca; Cucchi, Luca; Bigongiari, Franco; Errico, Walter
2013-05-01
This paper presents STAR, a flexible Telemetry, Tracking & Command (TT&C) transponder for Earth Observation (EO) small satellites, developed in collaboration with the INTECS and SITAEL companies. With respect to state-of-the-art EO transponders, STAR adds the possibility of scientific data transfer thanks to its 40 Mbps downlink data-rate. This feature represents an important optimization in terms of hardware mass, which is important for EO small satellites. Furthermore, in-flight re-configurability of communication parameters via telecommand allows in-orbit link optimization, which is especially useful for low orbit satellites where visibility can be as short as a few hundred seconds. STAR exploits the principles of digital radio to minimize the analog section of the transceiver. A 70 MHz intermediate frequency (IF) is the interface with an external S/X band radio-frequency front-end. The system is composed of a dedicated configurable high-speed digital signal processing part, the Signal Processor (SP), described in technology-independent VHDL and working at a clock frequency of 184.32 MHz, and a low speed control part, the Control Processor (CP), based on the 32-bit Gaisler LEON3 processor clocked at 32 MHz, with SpaceWire and CAN interfaces. The quantization parameters were fine-tailored to reach a trade-off between hardware complexity and implementation loss, which is less than 0.5 dB at BER = 10^-5 for the RX chain. The IF ports require 8-bit precision. The system prototype is fitted on the Xilinx Virtex-6 VLX75T-FF484 FPGA, of which a space-qualified version has been announced. The total device occupation is 82%.
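The bit-width versus implementation-loss trade-off mentioned above can be explored with a small sketch: a uniform mid-rise quantizer applied to a near-full-scale sine, measuring the resulting signal-to-quantization-noise ratio. The functions and the ideal-sampling setup are illustrative, not the STAR design:

```python
import math

def quantize(samples, bits):
    """Uniform mid-rise quantizer over [-1, 1), clamped to the
    outermost reconstruction levels."""
    step = 2.0 / (2 ** bits)
    return [min(1.0 - step / 2, max(-1.0 + step / 2,
            step * math.floor(s / step) + step / 2)) for s in samples]

def sqnr_db(samples, bits):
    """Signal-to-quantization-noise ratio in dB after quantization."""
    q = quantize(samples, bits)
    sig = sum(s * s for s in samples)
    err = sum((s - x) ** 2 for s, x in zip(samples, q))
    return 10 * math.log10(sig / err)
```

For a full-scale sine the classic 6.02·bits + 1.76 dB rule applies, so an 8-bit IF port gives roughly 50 dB of quantization headroom, comfortably above the sub-0.5 dB implementation-loss budget quoted.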
ALPIDE: the Monolithic Active Pixel Sensor for the ALICE ITS upgrade
NASA Astrophysics Data System (ADS)
Šuljić, M.
2016-11-01
The upgrade of the ALICE vertex detector, the Inner Tracking System (ITS), is scheduled to be installed during the next long shutdown period (2019-2020) of the CERN Large Hadron Collider (LHC). The current ITS will be replaced by seven concentric layers of Monolithic Active Pixel Sensors (MAPS) with a total active surface of ~10 m^2, thus making ALICE the first LHC experiment to implement MAPS detector technology on a large scale. The ALPIDE chip, based on the TowerJazz 180 nm CMOS Imaging Process, is being developed for this purpose. A particular process feature, the deep p-well, is exploited so that full CMOS logic can be implemented over the active sensor area without degrading the collection of the deposited charge. ALPIDE is implemented on silicon wafers with a high resistivity epitaxial layer. A single chip measures 15 mm by 30 mm and contains half a million pixels distributed in 512 rows and 1024 columns. In-pixel circuitry features amplification, shaping, discrimination and multi-event buffering. The readout is hit driven, i.e. only the addresses of hit pixels are sent to the periphery. The upgrade of the ITS presents two different sets of requirements for the sensors of the inner and of the outer layers, due to the significantly different track density, radiation level and active detector surface. The ALPIDE chip fulfils the stringent requirements in both cases. The detection efficiency is higher than 99%, the fake-hit probability is orders of magnitude lower than the required 10^-6, and the spatial resolution is within the required 5 μm. This performance is maintained even after a total ionising dose (TID) of 2.7 Mrad and a non-ionising energy loss (NIEL) fluence of 1.7 × 10^13 1 MeV neq/cm^2, which is above what is expected during the detector lifetime. A readout rate of 100 kHz is provided, and the power density of ALPIDE is less than 40 mW/cm^2. This contribution provides a summary of the ALPIDE features and main test results.
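The hit-driven (zero-suppressed) readout idea, where only the addresses of hit pixels leave the matrix, can be sketched as follows. The tiny frame and list encoding are illustrative; ALPIDE's actual priority-encoder circuitry and 512 × 1024 geometry are not modeled:

```python
def hit_driven_readout(frame):
    """Zero-suppressed readout: scan a binary pixel frame and emit only
    the (row, column) addresses of hit pixels, instead of shipping the
    full matrix to the periphery."""
    return [(r, c) for r, row in enumerate(frame)
                   for c, hit in enumerate(row) if hit]
```

The payoff is that readout bandwidth scales with occupancy rather than with the half-million-pixel matrix size, which is what makes a 100 kHz readout rate feasible at low power.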
Climate balance of biogas upgrading systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pertl, A., E-mail: andreas.pertl@boku.ac.a; Mostbauer, P.; Obersteiner, G.
2010-01-15
One of the numerous applications of renewable energy is the use of upgraded biogas fed, where needed, into the gas grid. The aim of the present study was to identify an upgrading scenario featuring minimum overall GHG emissions. The study was based on a life-cycle approach, taking into account GHG emissions from plant cultivation through to the process of energy conversion. For anaerobic digestion, two substrates have been taken into account: (1) agricultural resources and (2) municipal organic waste. The study provides results for four different upgrading technologies, including the BABIU (Bottom Ash for Biogas Upgrading) method. As the transport of bottom ash is a critical factor in the BABIU method, different transport distances and means of conveyance (lorry, train) have been considered. Furthermore, aspects including biogas compression and energy conversion in a combined heat and power plant were assessed. GHG emissions from a conventional energy supply system (natural gas) were estimated as a reference scenario. The main findings underline how the overall reduction of GHG emissions may be rather limited; for example, in an agricultural context the PSA scenarios emit only 10% less greenhouse gases than the reference scenario. The BABIU method constitutes an efficient upgrading method capable of attaining a high reduction of GHG emissions by sequestration of CO2.
Parallel evolution of image processing tools for multispectral imagery
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.
2000-11-01
We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process across multiple processors (a workstation cluster) and develop a model for predicting run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI covering the recent Cerro Grande fire at Los Alamos, NM, USA.
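A run-time model of the kind described can be sketched with an Amdahl-style serial/parallel split; the parameter names and the assumption that fitness evaluation divides evenly across workers are illustrative, not the authors' fitted model:

```python
def predicted_runtime(n_procs, t_serial, t_parallel):
    """Predicted wall-clock time for an evolutionary run on n_procs
    workers: the serial fraction (selection, bookkeeping) is irreducible;
    the parallel fraction (population fitness evaluation) divides evenly."""
    return t_serial + t_parallel / n_procs

def speedup(n_procs, t_serial, t_parallel):
    """Speed-up relative to a single-processor run."""
    return predicted_runtime(1, t_serial, t_parallel) / \
           predicted_runtime(n_procs, t_serial, t_parallel)
```

Fitting t_serial and t_parallel to a few measured cluster runs then predicts run-times, and the asymptote (t_serial + t_parallel) / t_serial bounds the achievable speed-up.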
Satellite tracking and Earth dynamics research programs
NASA Technical Reports Server (NTRS)
Pearlman, M. R.
1984-01-01
Following an upgrading program, the ranging performance capabilities of a satellite-tracking pulsed laser system were assessed in terms of range accuracy, range noise, data yield, and reliability. With a shorter laser pulse duration (2.5 to 3.0 nsec) and a new analog pulse processing system, systematic range errors were reduced to 3 to 5 cm, and range noise was reduced to 5 to 15 cm on Starlette and BE-C and 10 to 18 cm on LAGEOS. The maximum pulse repetition rate was increased to 30 pulses per minute, and the signal-to-noise ratio was significantly improved by installing a 3 Å interference filter and by reducing the range gate window to 200 to 400 nsec. The solution to a problem involving leakage of a fraction of the laser oscillator pulse through the pulse chopper was outlined.
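The centimeter-level figures above translate directly into timing requirements, since the one-way range is half the round-trip light time. A minimal sketch (physical constant only; the actual SAO pulse-processing chain is not modeled):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_round_trip(t_round_trip_s):
    """One-way station-to-satellite range from the measured round-trip
    time of a laser pulse, as in satellite laser ranging."""
    return C * t_round_trip_s / 2.0

def timing_for_range_noise(range_noise_m):
    """Round-trip timing precision corresponding to a given range noise."""
    return 2.0 * range_noise_m / C
```

One nanosecond of round-trip time corresponds to about 15 cm of range, which is why nanosecond-scale pulses plus centroid-finding analog processing are needed to reach the quoted 3 to 5 cm systematic error.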
A review of advances in pixel detectors for experiments with high rate and radiation
NASA Astrophysics Data System (ADS)
Garcia-Sciveres, Maurice; Wermes, Norbert
2018-06-01
The Large Hadron Collider (LHC) experiments ATLAS and CMS have established hybrid pixel detectors as the instrument of choice for particle tracking and vertexing in high rate and radiation environments, as they operate close to the LHC interaction points. With the High-Luminosity LHC upgrade now in sight, for which the tracking detectors will be completely replaced, new generations of pixel detectors are being devised. They have to address enormous challenges in terms of data throughput and radiation levels, ionizing and non-ionizing, that harm the sensing and readout parts of pixel detectors alike. Advances in microelectronics and microprocessing technologies now enable large scale detector designs with unprecedented performance in measurement precision (space and time), radiation-hard sensors and readout chips, hybridization techniques, lightweight supports, and fully monolithic approaches to meet these challenges. This paper reviews the world-wide effort on these developments.
Power Balance and Impurity Studies in TCS
NASA Astrophysics Data System (ADS)
Grossnickle, J. A.; Pietrzyk, Z. A.; Vlases, G. C.
2003-10-01
A "zero-dimension" power balance model was developed based on measurements of absorbed power, radiated power, absolute D_α, temperature, and density for the TCS device. Radiation was determined to be the dominant source of power loss for medium to high density plasmas. The total radiated power was strongly correlated with the Oxygen line radiation. This suggests Oxygen is the dominant radiating species, which was confirmed by doping studies. These also extrapolate to a Carbon content below 1.5%. Determining the source of the impurities is an important question that must be answered for the TCS upgrade. Preliminary indications are that the primary sources of Oxygen are the stainless steel end cones. A Ti gettering system is being installed to reduce this Oxygen source. A field line code has been developed for use in tracking where open field lines terminate on the walls. Output from this code is also used to generate grids for an impurity tracking code.
Satellite-tracking and Earth dynamics research programs
NASA Technical Reports Server (NTRS)
1983-01-01
The Arequipa station obtained a total of 31,989 quick-look range observations on 719 passes in the six months. Data were acquired from Metsahovi, San Fernando, Kootwijk, Wettzell, Grasse, Simosato, Graz, Dodaira and Herstmonceux. Work progressed on the setup of SAO-1. Discussions were also initiated with the Israelis on the relocation of SAO-3 to a site in southern Israel in FY 1984. Arequipa and the cooperating stations continued to track LAGEOS at highest priority for polar motion and Earth rotation studies, and for other geophysical investigations, including crustal dynamics, earth and ocean tides, and the general development of precision orbit determination. SAO completed the revisions to its field software as part of its recent upgrading program. With cesium standards, Omega receivers, and other timekeeping aids, the station was able to maintain a timing accuracy of better than plus or minus 6 to 8 microseconds.
Issues Associated with a Hypersonic Maglev Sled
NASA Technical Reports Server (NTRS)
Haney, Joseph W.; Lenzo, J.
1996-01-01
Magnetic levitation has been explored for applications from motors to transportation. All of these applications have been at velocities where the physics of the air or operating fluids is fairly well known. Application of Maglev at hypersonic velocities (Mach greater than 5) presents many opportunities, but also issues that require understanding and resolution. Use of Maglev to upgrade the High Speed Test Track at Holloman Air Force Base in Alamogordo, New Mexico is an actual hypersonic application that provides the opportunity to improve test capabilities. However, there are several design issues that require investigation. This paper presents an overview of the application of Maglev to the test track and the issues associated with developing a hypersonic Maglev sled. The focus of this paper is on the issues with the Maglev sled design, rather than on the development of superconducting magnets for the sled system.
The large-area hybrid-optics RICH detector for the CLAS12 spectrometer
Mirazita, M.; Angelini, G.; Balossino, I.; ...
2017-01-16
A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Lab, to study the 3D nucleon structure in the yet poorly explored valence region by deep-inelastic scattering, and to perform precision measurements in hadronization and hadron spectroscopy. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors, and densely packed and highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). Finally, the preliminary results of individual detector component tests and of the prototype performance at test-beams are reported here.
The design of a fast Level 1 Track trigger for the ATLAS High Luminosity Upgrade
NASA Astrophysics Data System (ADS)
Miller Allbrooke, Benedict Marc; ATLAS Collaboration
2017-10-01
The ATLAS experiment at the high-luminosity LHC will face a five-fold increase in the number of interactions per collision relative to the ongoing Run 2. Because pile-up related activity increases the likelihood of individual trigger thresholds being passed, this will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper turn-on curves, b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project lies in the development of a fast, precise custom electronic device integrated into the hardware-based first trigger level of the experiment, with repercussions propagating as far as the detector read-out philosophy.
Jin, Byung-Soo; Kang, Seok-Hyun; Kim, Duk-Yoon; Oh, Hoon-Gyu; Kim, Chun-Il; Moon, Gi-Hak; Kwon, Tae-Gyun; Park, Jae-Shin
2015-09-01
To evaluate prospectively the role of prostate-specific antigen (PSA) density in predicting Gleason score upgrading in prostate cancer patients eligible for active surveillance (T1/T2, biopsy Gleason score ≤ 6, PSA ≤ 10 ng/mL, and ≤ 2 positive biopsy cores). Between January 2010 and November 2013, among patients who underwent greater than 10-core transrectal ultrasound-guided biopsy, 60 patients eligible for active surveillance underwent radical prostatectomy. Using the modified Gleason criteria, the tumor grade of the surgical specimens was examined and compared with the biopsy results. Tumor upgrading occurred in 24 patients (40.0%). Extracapsular disease and positive surgical margins were found in 6 patients (10.0%) and 8 patients (13.3%), respectively. A statistically significant correlation between PSA density and postoperative upgrading was found (p=0.030); this was in contrast with the other studied parameters, which failed to reach significance, including PSA, prostate volume, number of biopsy cores, and number of positive cores. Tumor upgrading was also highly associated with extracapsular cancer extension (p<0.001). The estimated optimal cutoff value of PSA density was 0.13 ng/mL², obtained by receiver operating characteristic analysis (area under the curve=0.66; p=0.020; 95% confidence interval, 0.53-0.78). PSA density is a strong predictor of Gleason score upgrading after radical prostatectomy in patients eligible for active surveillance. Because tumor upgrading increases the potential for adverse postoperative pathological findings and worse prognosis, PSA density should be considered when treating and counseling patients eligible for active surveillance.
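The cutoff-selection procedure described (ROC analysis with an optimal threshold) can be illustrated with a minimal sketch on synthetic values. The data, function names, and the Youden-index threshold rule are illustrative; the study's own statistical software and patient data are not reproduced:

```python
def roc_auc(pos, neg):
    """Mann-Whitney formulation of the area under the ROC curve: the
    probability that a randomly chosen upgraded case has a higher PSA
    density than a randomly chosen non-upgraded case (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_cutoff(pos, neg):
    """Candidate threshold maximizing sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(pos) | set(neg)):
        sens = sum(p >= t for p in pos) / len(pos)
        spec = sum(n < t for n in neg) / len(neg)
        if sens + spec - 1 > best_j:
            best_t, best_j = t, sens + spec - 1
    return best_t
```

With the upgraded and non-upgraded groups' PSA densities as `pos` and `neg`, this yields an AUC and a cutoff analogous to the reported 0.66 and 0.13.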
Development of the FPI+ as facility science instrument for SOFIA cycle four observations
NASA Astrophysics Data System (ADS)
Pfüller, Enrico; Wiedemann, Manuel; Wolf, Jürgen; Krabbe, Alfred
2016-08-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a heavily modified Boeing 747SP aircraft accommodating a 2.5 m infrared telescope. This airborne observation platform takes astronomers to flight altitudes of up to 13.7 km (45,000 ft) and therefore allows an unobstructed view of the infrared universe at wavelengths between 0.3 μm and 1600 μm. SOFIA is currently completing its fourth cycle of observations and utilizes eight different imaging and spectroscopic science instruments. New instruments for SOFIA's cycle 4 observations are the High-resolution Airborne Wideband Camera-plus (HAWC+) and the Focal Plane Imager (FPI+). The latter is an integral part of the telescope assembly and is used on every SOFIA flight to ensure precise tracking on the desired targets. The FPI+ is used as a visible-light photometer in its role as a facility science instrument. Since the upgrade of the FPI camera and electronics in 2013, it uses a thermo-electrically cooled science grade EM-CCD sensor inside a commercial-off-the-shelf Andor camera. The back-illuminated sensor has a peak quantum efficiency of 95% and the dark current is as low as 0.01 e-/pix/sec. With this new hardware the telescope has successfully tracked on 16th magnitude stars, and thus the sky coverage, i.e. the area of sky that has suitable tracking stars, has increased to 99%. Before its use as an integrated tracking imager, the same type of camera was used as a standalone diagnostic tool to analyze the telescope pointing stability at frequencies up to 200 Hz (imaging at 400 fps). These measurements help to improve the telescope pointing control algorithms and therefore reduce the image jitter in the focal plane. Science instruments benefit from this improvement with smaller image sizes for longer exposure times. The FPI has also been used to support astronomical observations such as stellar occultations by the dwarf planet Pluto and a number of exoplanet transits.
The observation of occultation events in particular benefits from the high camera sensitivity, fast readout capability and low read noise, making it possible to achieve high time resolution on the photometric light curves. This paper gives an overview of the development from the standalone diagnostic camera to the upgraded guiding/tracking camera, fully integrated into the telescope while still offering its diagnostic capabilities, and finally to its use as a facility science instrument on SOFIA.