Parallel-Batch Scheduling and Transportation Coordination with Waiting Time Constraint
Gong, Hua; Chen, Daheng; Xu, Ke
2014-01-01
This paper addresses a parallel-batch scheduling problem that incorporates the transportation of raw materials or semifinished products before processing, subject to a waiting time constraint. Orders located at different suppliers are transported by vehicles to a manufacturing facility for further processing; one vehicle can carry only one order per shipment. Each order arriving at the facility must be processed within a limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of a batch is the largest processing time of the orders in it. The goal is to find a schedule that minimizes the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also show that the problem with equal processing times on the machine is NP-hard, and we provide a pseudopolynomial-time dynamic programming algorithm establishing that this case is NP-hard only in the ordinary sense. Finally, a polynomial-time optimal algorithm is presented for the special case with equal processing times and equal transportation times for all orders. PMID:24883385
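The batching rule described in this abstract (a batch runs as long as its longest order, and every order in it completes when the batch does) is easy to illustrate. The following is a toy sketch, not the paper's algorithm; order lengths and batch groupings are made up:

```python
# Illustrative sketch: on a parallel-batch machine, a batch's processing
# time is the maximum processing time of the orders it contains, and every
# order in a batch completes when the batch completes.

def total_flow_time(batches, release=0):
    """Sum of completion times over all orders, batches run back-to-back."""
    t = release
    flow = 0
    for batch in batches:
        t += max(batch)            # batch length = longest order in it
        flow += t * len(batch)     # each order in the batch finishes at t
    return flow

# Example: grouping short orders together beats mixing them with long ones.
print(total_flow_time([[2, 2], [9, 10]]))  # 2*2 + 12*2 = 28
print(total_flow_time([[2, 9], [2, 10]]))  # 9*2 + 19*2 = 56
```

The example shows why batch composition matters for total flow time even before transportation and waiting-time constraints enter.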
Equal Protection and Due Process: Contrasting Methods of Review under Fourteenth Amendment Doctrine.
ERIC Educational Resources Information Center
Hughes, James A.
1979-01-01
Argues that the Court has, at times, confused equal protection and due process methods of review, primarily by employing interest balancing in certain equal protection cases that should have been subjected to due process analysis. Available from Harvard Civil Rights-Civil Liberties Law Review, Harvard Law School, Cambridge, MA 02138; sc $4.00.…
Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui
2011-01-01
To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Karwi, Abbas Ali Mahmmod
2018-04-01
Laser has many attractive properties that make it suitable for material processing. In this work, a laser was used as a modern heat-treatment source to prevent the formation of a non-protective oxide layer, with an intensity of 1.31×10^5 W/cm², a lasing time of 300 µs, a wavelength of 1.063 µm, and a spot radius of 125 µm. Lithium is depleted during conventional heat-treatment processes; the main factors affecting lithium depletion are temperature and time. Lithium is retained as a solid solution in the casting method. The microhardness of the affected zone reaches acceptable values for various ageing times and hardening depths. The main conventional heat-treatment processes are homogenization, solution heat treatment, and ageing. Alloys were prepared with specific lithium concentrations (2-2.5%). Oxides with different shapes are formed. Temperature distributions and heating and cooling rates were examined externally and internally to assess the effect of laser pulse generation on the bulk body.
Mallon, Richard G.
1984-01-01
Method and apparatus for narrowing the distribution of residence times of any size particle and equalizing the residence times of large and small particles in fluidized beds. Particles are moved up one fluidized column and down a second fluidized column with the relative heights selected to equalize residence times of large and small particles. Additional pairs of columns are staged to narrow the distribution of residence times and provide complete processing of the material.
Lp-estimates on diffusion processes
NASA Astrophysics Data System (ADS)
Yan, Litan; Zhu, Bei
2005-03-01
Let X = (X_t)_{t≥0} be a diffusion process on ℝ given by dX_t = μ(X_t) dt + σ(X_t) dB_t, where B = (B_t)_{t≥0} is a standard Brownian motion starting at zero, μ and σ are continuous functions on ℝ, and σ(x) > 0 if x ≠ 0. For a nonnegative continuous function φ we define the functional J_t = ∫_0^t φ(X_s) ds, t ≥ 0. Under suitable conditions we establish the relationship between the L^p-norm of sup_{0≤t≤τ} X_t and the L^p-norm of J_τ for all stopping times τ. In particular, for a Bessel process Z of dimension δ > 0 starting at zero, we show that the corresponding inequalities hold for all p > 0, where C_p and c_p are positive constants depending only on p, and H_μ and h_μ are the inverses of x ↦ (e^{2μx} − 2μx − 1)/(2μ²) and x ↦ (e^{−2μx} + 2μx − 1)/(2μ²) on (0, ∞), respectively.
Equalizer design techniques for dispersive cables with application to the SPS wideband kicker
NASA Astrophysics Data System (ADS)
Platt, Jason; Hofle, Wolfgang; Pollock, Kristin; Fox, John
2017-10-01
A wide-band vertical instability feedback control system in development at CERN requires 1-1.5 GHz of bandwidth for the entire processing chain, from the beam pickups through the feedback signal digital processing to the back-end power amplifiers and kicker structures. Dispersive effects in cables, amplifiers, pickup and kicker elements can result in distortions in the time domain signal as it proceeds through the processing system, and deviations from linear phase response reduce the allowable bandwidth for the closed-loop feedback system. We have developed an equalizer analog circuit that compensates for these dispersive effects. Here we present a design technique for the construction of an analog equalizer that incorporates the effect of parasitic circuit elements in the equalizer to increase the fidelity of the implemented equalizer. Finally, we show results from the measurement of an assembled backend equalizer that corrects for dispersive elements in the cables over a bandwidth of 10-1000 MHz.
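The compensation idea described above (an equalizer whose gain rises with frequency to undo dispersive cable loss) can be sketched numerically. This is a hedged toy model, not the CERN circuit: cable loss is taken as the usual skin-effect form exp(−k√f) with an invented coefficient k, and the equalizer is its ideal inverse:

```python
import numpy as np

# Toy model: coaxial-cable skin-effect loss grows roughly as exp(-k*sqrt(f));
# an equalizer aims at the inverse response so that cable * equalizer is
# flat across the band. The loss coefficient k is illustrative only.

f = np.linspace(10e6, 1000e6, 500)      # 10-1000 MHz band, as in the paper
k = 2e-5                                 # assumed loss coefficient
cable = np.exp(-k * np.sqrt(f))          # cable magnitude response
equalizer = 1.0 / cable                  # ideal inverse gain vs. frequency
flat = cable * equalizer                 # compensated response

ripple_db = 20 * np.log10(flat.max() / flat.min())
print(round(ripple_db, 6))               # ~0 dB: flat in this ideal model
```

A real analog realization, as the paper discusses, can only approximate this inverse with a finite pole/zero network, and parasitic elements shift those poles and zeros, which is why the design technique accounts for them.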
Ioannone, F; Di Mattia, C D; De Gregorio, M; Sergi, M; Serafini, M; Sacchetti, G
2015-05-01
The effect of roasting on the content of flavanols and proanthocyanidins and on the antioxidant activity of cocoa beans was investigated. Cocoa beans were roasted at three temperatures (125, 135 and 145 °C), for different times, to reach moisture contents of about 2 g (100 g)⁻¹. Flavanols and proanthocyanidins were determined, and the antioxidant activity was tested by total phenolic index (TPI), ferric reducing antioxidant power (FRAP) and total radical trapping antioxidant parameter (TRAP) methods. The rates of flavanol and total proanthocyanidin loss increased with roasting temperature. Moisture content of the roasted beans being equal, high temperature-short time processes minimised proanthocyanidins loss. Moisture content being equal, the average roasting temperature (135 °C) determined the highest TPI and FRAP values and the highest temperature (145 °C) determined the lowest TPI values. Moisture content being equal, low temperature-long time roasting processes maximised the chain-breaking activity, as determined by the TRAP method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Jarzynski equality: connections to thermodynamics and the second law.
Palmieri, Benoit; Ronis, David
2007-01-01
The one-dimensional expanding ideal gas model is used to compute the exact nonequilibrium distribution function. The state of the system during the expansion is defined in terms of local thermodynamic quantities. The final equilibrium free energy, obtained a long time after the expansion, is compared against the free energy that appears in the Jarzynski equality. Within this model, where the Jarzynski equality holds rigorously, the free energy change that appears in the equality does not equal the actual free energy change of the system at any time of the process. More generally, the work bound that is obtained from the Jarzynski equality is an upper bound to the upper bound that is obtained from the first and second laws of thermodynamics. The cancellation of the dissipative (nonequilibrium) terms that result in the Jarzynski equality is shown in the framework of response theory. This is used to show that the intuitive assumption that the Jarzynski work bound becomes equal to the average work done when the system evolves quasistatically is incorrect under some conditions.
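For reference, the Jarzynski equality discussed in this abstract relates the ensemble average of the exponentiated work W, at inverse temperature β = 1/k_BT, to the free-energy change ΔF appearing in the equality; Jensen's inequality then yields the work bound the abstract refers to:

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
\quad\Longrightarrow\quad
\langle W \rangle \ge \Delta F .
```

The paper's point is that this ΔF need not coincide with the system's actual free-energy change at any time during the process.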
Zhou, Xian; Chen, Xue
2011-05-09
Digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments and are therefore a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware, so parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the receiver's dynamic dispersion tolerance range, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can perform synchronization, equalization, and polarization demultiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal at clock rates in the hundreds-of-MHz range. © 2011 Optical Society of America
Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun
2017-11-10
In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
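The "equal extra material removal" idea can be sketched in one dimension: if the least-squares dwell-time solution goes negative (physically impossible), a uniform extra removal depth is added at every surface point until all dwell times are nonnegative. The following toy model is a hedged illustration, not the paper's algorithm; the Gaussian tool footprint, grid size, and target map are invented:

```python
import numpy as np

# Toy 1-D model: removal = A @ dwell_times, where A holds material removed
# per unit dwell time (a Gaussian tool footprint here). Machines cannot
# dwell for negative time, so an equal extra removal depth c is added at
# every point until the dwell-time solution becomes nonnegative.

def positive_dwell_times(A, removal, step=0.1, max_extra=100.0):
    c = 0.0
    while c <= max_extra:
        t, *_ = np.linalg.lstsq(A, removal + c, rcond=None)
        if t.min() >= 0:
            return t, c             # dwell times and extra depth used
        c += step
    raise ValueError("no nonnegative solution within max_extra")

n = 40
x = np.arange(n)
A = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # unit-dwell footprint
target = 1.0 + 0.5 * np.sin(2 * np.pi * x / n)      # desired removal map

t, extra = positive_dwell_times(A, target)
print(t.min() >= 0, extra >= 0)
```

The paper's contribution is to make this extra removal minimal while also respecting machine-dynamics limits (acceleration, feed rate), which the naive search above ignores.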
Equalizing Si photodetectors fabricated in standard CMOS processes
NASA Astrophysics Data System (ADS)
Guerrero, E.; Aguirre, J.; Sánchez-Azqueta, C.; Royo, G.; Gimeno, C.; Celma, S.
2017-05-01
This work presents a new continuous-time equalization approach to overcome the limited bandwidth of integrated CMOS photodetectors. It is based on a split-path topology featuring completely decoupled controls for boosting and gain; this allows better tuning of the equalizer than architectures based on the degenerated differential pair, which is particularly helpful for achieving a proper calibration of the system. The equalizer is intended to enhance the bandwidth of standard CMOS n-well/p-bulk differential photodiodes (DPDs), which falls below 10 MHz and represents a bottleneck in fully integrated optoelectronic interfaces meeting the low-cost requirements of modern smart sensors. The proposed equalizer has been simulated in a 65 nm CMOS process, biased from a single 1 V supply, and increases the bandwidth of the DPD up to 3 GHz.
Time Reversal Acoustic Communication Using Filtered Multitone Modulation
Sun, Lin; Chen, Baowei; Li, Haisen; Zhou, Tian; Li, Ruo
2015-01-01
The multipath spread in underwater acoustic channels is severe and, therefore, when the symbol rate of the time reversal (TR) acoustic communication using single-carrier (SC) modulation is high, the large intersymbol interference (ISI) span caused by multipath reduces the performance of the TR process and needs to be removed using the long adaptive equalizer as the post-processor. In this paper, a TR acoustic communication method using filtered multitone (FMT) modulation is proposed in order to reduce the residual ISI in the processed signal using TR. In the proposed method, FMT modulation is exploited to modulate information symbols onto separate subcarriers with high spectral containment and TR technique, as well as adaptive equalization is adopted at the receiver to suppress ISI and noise. The performance of the proposed method is assessed through simulation and real data from a trial in an experimental pool. The proposed method was compared with the TR acoustic communication using SC modulation with the same spectral efficiency. Results demonstrate that the proposed method can improve the performance of the TR process and reduce the computational complexity of adaptive equalization for post-process. PMID:26393586
Moving beyond gender: processes that create relationship equality.
Knudson-Martin, Carmen; Mahoney, Anne Rankin
2005-04-01
Equality is related to relationship success, yet few couples achieve it. In this qualitative study, we examine how couples with children in two time cohorts (1982 and 2001) moved toward equality. The analysis identifies three types of couples: postgender, gender-legacy, and traditional. Movement toward equality is facilitated by (a) stimulus for change, including awareness of gender, commitment to family and work, and situational pressures; and (b) patterns that promote change, including active negotiation, challenges to gender entitlement, development of new competencies, and mutual attention to relationship and family tasks. Implications for practice are discussed.
Mallon, R.G.
1983-05-13
The invention relates to oil shale retorting and more particularly to staged fluidized bed oil shale retorting. Method and apparatus are disclosed for narrowing the distribution of residence times of any size particle and equalizing the residence times of large and small particles in fluidized beds. Particles are moved up one fluidized column and down a second fluidized column with the relative heights selected to equalize residence times of large and small particles. Additional pairs of columns are staged to narrow the distribution of residence times and provide complete processing of the material.
Preisig, James C
2005-07-01
Equations are derived for analyzing the performance of channel-estimate-based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel-estimate-based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow-water environments and motivates the implementation of a DFE that is robust with respect to channel-estimation errors.
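The error decomposition described in this abstract can be written compactly; with the definitions above, the total soft decision error splits as:

```latex
\sigma_s^2 \;=\; \underbrace{\sigma_0^2}_{\text{minimum achievable}}
\;+\; \underbrace{\sigma_e^2}_{\text{excess, from channel-estimation error}} .
```

Only σ_e² depends on the quality of the channel estimate, which is why the analysis isolates it when comparing the DFE and time-reversal receivers.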
Millimeter Wave Alternate Route Study.
1981-04-01
processing gains are based upon the assumption that the jammer equally distributes his available power over all the hopping frequencies. If this is true... Example assumptions: 25 GHz hopping range (e.g., 20 GHz to 45 GHz); 10 ms settling time; 0.1 second dwell time, implying an 11% increase in channel data... of the architectures presented previously. The assumption that each link has equal probability p of being disrupted (i.e., successfully jammed) seems
Gravitational wave searches with pulsar timing arrays: Cancellation of clock and ephemeris noises
NASA Astrophysics Data System (ADS)
Tinto, Massimo
2018-04-01
We propose a data processing technique to cancel monopole and dipole noise sources (such as clock and ephemeris noises, respectively) in pulsar timing array searches for gravitational radiation. These noises are the dominant sources of correlated timing fluctuations in the lower part (≈10⁻⁹-10⁻⁸ Hz) of the gravitational wave band accessible by pulsar timing experiments. After deriving the expressions that reconstruct these noises from the timing data, we estimate the gravitational wave sensitivity of our proposed processing technique to single-source signals to be at least one order of magnitude higher than that achievable by directly processing the timing data from an equal-size array. Since arrays can generate pairs of clock and ephemeris-free timing combinations that are no longer affected by correlated noises, we implement with them the cross-correlation statistic to search for an isotropic stochastic gravitational wave background. We find the resulting optimal signal-to-noise ratio to be more than one order of magnitude larger than that obtainable by correlating pairs of timing data from arrays of equal size.
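The monopole-cancellation idea can be illustrated with a toy model (not the paper's estimator): clock noise enters every pulsar's residuals identically, so differencing any pair of pulsars removes it exactly. All amplitudes below are invented for illustration:

```python
import numpy as np

# Toy model: clock noise is a monopole term common to every pulsar's
# timing residuals; differencing a pulsar pair cancels it exactly,
# leaving only the pulsars' independent noises.

rng = np.random.default_rng(0)
n = 4096
clock = 50.0 * rng.standard_normal(n)     # dominant common "clock" noise
p1 = clock + rng.standard_normal(n)       # pulsar 1 residuals
p2 = clock + rng.standard_normal(n)       # pulsar 2 residuals

diff = p1 - p2                            # clock-free timing combination
print(round(float(np.std(p1)), 1), round(float(np.std(diff)), 2))
```

In the raw residuals the common term dominates; in the differenced combination only the independent noises (standard deviation ≈ √2 here) survive. The paper's technique generalizes this to combinations that also remove dipole (ephemeris) terms while preserving gravitational-wave sensitivity.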
NASA Astrophysics Data System (ADS)
Giuliani, Maximiliano; Dutcher, John
2013-03-01
A key step in the life of a bacterium is its division into two daughter cells of equal size. This process is carefully controlled and regulated so that equal partitioning of the cellular machinery is obtained. In E. coli, this regulation is accomplished, in part, by the Min protein system. The Min proteins undergo an oscillation between the poles of rod-shaped E. coli bacteria. We use high magnification, time-resolved total internal reflection fluorescence microscopy to characterize the temporal distributions of different processes within the oscillation: the MinD-MinE interaction time, the residence time for membrane bound MinD, and the recruitment time for MinD to be observed at the opposite pole. We also characterize the change in each of these processes in the presence of the antimicrobial compound polymyxin B (PMB). We show that the times corresponding to the removal of MinD from one pole and the recruitment of MinD at the opposite pole are correlated. We explain this correlation through the existence of a concentration threshold. The effect of PMB on the concentration threshold is used to identify which process within the oscillation is most affected.
Das, Moupriya
2014-12-01
The states of an overdamped Brownian particle confined in a two-dimensional bilobal enclosure are considered to correspond to two binary values: 0 (left lobe) and 1 (right lobe). An ensemble of such particles represents bits of entropic information. An external bias is applied on the particles, equally distributed in two lobes, to drive them to a particular lobe erasing one kind of bit of information. It has been shown that the average work done for the entropic memory erasure process approaches the Landauer bound for a very slow erasure cycle. Furthermore, the detailed Jarzynski equality holds to a very good extent for the erasure protocol, so that the Landauer bound may be calculated irrespective of the time period of the erasure cycle in terms of the effective free-energy change for the process. The detailed Jarzynski equality applied to two subprocesses, namely the transition from entropic memory state 0 to state 1 and the transition from entropic memory state 1 to state 1, connects the work done on the system to the probability to occupy the two states under a time-reversed process. In the entire treatment, the work appears as a boundary effect of the physical confinement of the system not having a conventional potential energy barrier. Finally, an analytical derivation of the detailed and classical Jarzynski equality for Brownian movement in confined space with varying width has been proposed. Our analytical scheme supports the numerical simulations presented in this paper.
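The classical Jarzynski equality invoked above is easy to verify numerically in a toy setting. The check below uses a Gaussian work distribution, not the paper's bilobal-enclosure model; for Gaussian work the free-energy change consistent with the equality is known in closed form:

```python
import numpy as np

# Numerical check of the classical Jarzynski equality for Gaussian work,
# W ~ N(mu, sigma^2): <exp(-W/kT)> = exp(-DeltaF/kT) holds with
# DeltaF = mu - sigma^2/(2 kT), so the mean work <W> = mu exceeds DeltaF
# by the dissipated part sigma^2/(2 kT).

rng = np.random.default_rng(1)
kT, mu, sigma = 1.0, 2.0, 1.0
W = rng.normal(mu, sigma, 2_000_000)

delta_F_est = -kT * np.log(np.mean(np.exp(-W / kT)))  # from Jarzynski average
delta_F = mu - sigma**2 / (2 * kT)                    # analytic value: 1.5

print(round(float(delta_F_est), 2), delta_F)
```

Both numbers agree (≈1.5), while ⟨W⟩ = 2.0 stays above ΔF, mirroring the Landauer-bound behavior the abstract describes for slow erasure cycles.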
Niles, Nathaniel W; Conley, Sheila M; Yang, Rayson C; Vanichakarn, Pantila; Anderson, Tamara A; Butterly, John R; Robb, John F; Jayne, John E; Yanofsky, Norman N; Proehl, Jean A; Guadagni, Donald F; Brown, Jeremiah R
2010-01-01
Rural ST-segment elevation myocardial infarction (STEMI) care networks may be particularly disadvantaged in achieving a door-to-balloon time (D2B) of less than or equal to 90 minutes recommended in current guidelines. ST-ELEVATION MYOCARDIAL INFARCTION PROCESS UPGRADE PROJECT: A multidisciplinary STEMI process upgrade group at a rural percutaneous coronary intervention center implemented evidence-based strategies to reduce time to electrocardiogram (ECG) and D2B, including catheterization laboratory activation triggered by either a prehospital ECG demonstrating STEMI or an emergency department physician diagnosing STEMI, single-call catheterization laboratory activation, catheterization laboratory response time less than or equal to 30 minutes, and prompt data feedback. An ongoing regional STEMI registry was used to collect process time intervals, including time to ECG and D2B, in a consecutive series of STEMI patients presenting before (group 1) and after (group 2) strategy implementation. Significant reductions in time to first ECG in the emergency department and D2B were seen in group 2 compared with group 1. Important improvement in the process of acute STEMI patient care was accomplished in the rural percutaneous coronary intervention center setting by implementing evidence-based strategies. Copyright © 2010 Elsevier Inc. All rights reserved.
Detection and Imaging of Moving Targets with LiMIT SAR Data
2017-03-03
include space-time adaptive processing (STAP) or displaced phase center antenna (DPCA) [4]-[7]. Page et al. combined constant-acceleration target... motion focusing with space-time adaptive processing (STAP), and included the refocusing parameters in the STAP steering vector. Due to inhomogeneous... wavelength λ and slow time t, of a moving target after matched-filter and passband-equalization processing can be expressed as: P(t) = exp(−j(4π/λ)‖r⃗_p
DSP+FPGA-based real-time histogram equalization system of infrared image
NASA Astrophysics Data System (ADS)
Gu, Dongsheng; Yang, Nansheng; Pi, Defu; Hua, Min; Shen, Xiaoyan; Zhang, Ruolan
2001-10-01
Histogram modification is a simple but effective method to enhance an infrared image. Several methods exist to equalize an infrared image's histogram, suited to the differing characteristics of infrared images: the traditional HE (histogram equalization) method, and the improved HP (histogram projection) and PE (plateau equalization) methods, among others. To realize all these methods in a single system, the system must have a large memory and extremely fast speed. In our system, we introduce a DSP + FPGA-based real-time processing technology to handle them together: the FPGA realizes the part common to these methods, while the DSP handles the parts that differ. The choice of method and its parameters can be input from a keyboard or a computer. By this means, the system is powerful yet easy to operate and maintain. In this article, we give the system diagram and the software flow chart of the methods, and we conclude with an infrared image and its histogram before and after processing with the HE method.
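Of the methods named above, classical HE is the simplest to sketch. The following is a minimal NumPy illustration for an 8-bit image (the HP and PE variants differ mainly in how the histogram is modified before the cumulative mapping); the low-contrast test frame is synthetic:

```python
import numpy as np

# Minimal sketch of classical histogram equalization (HE) for an 8-bit
# image: map each gray level through the normalized cumulative histogram.

def histogram_equalize(img, levels=256):
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[cdf > 0][0]              # count at first occupied level
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * (levels - 1))
    return np.clip(lut, 0, levels - 1).astype(np.uint8)[img]

# Low-contrast toy "infrared" frame occupying only gray levels ~100-119:
img = (100 + 20 * np.random.default_rng(2).random((64, 64))).astype(np.uint8)
out = histogram_equalize(img)
print(img.min(), img.max(), out.min(), out.max())  # output spans 0..255
```

The lookup-table structure is what makes a shared FPGA implementation natural: the per-level mapping differs between HE, HP, and PE, but the histogram accumulation and table lookup are common.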
Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform
NASA Technical Reports Server (NTRS)
Amini, Abolfazl M.; Figueroa, Fenando
2003-01-01
In this paper, we use Matlab to model the health monitoring of a system through information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change, then a pressure sensor associated with the process in the tank should also experience a step change. The step up and step down, as well as the sensor disturbances, are assumed to be exponential. An RC network is used to model the main process, which is step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system is allowed to run for a period equal to three time constants of the main process before changes occur. Then each point of the signal is selected together with a trailing window of previously collected data. Two trailing lengths are used: one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC component is removed from each set of data, the data are passed through a window, and spectra are calculated for each set. To extract features, the signal power, peak, and spectrum are plotted versus time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of Gaussian-distributed noisy cases.
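The per-sample pipeline described in this abstract (trailing window, DC removal, taper, spectrum, then power and peak as features) can be sketched directly. This is an illustrative reconstruction, not the paper's Matlab code; the RC-style test signal and lengths are chosen to echo the abstract:

```python
import numpy as np

# Sketch of the described pipeline: each sample is taken with a trailing
# window of past data; the DC level is removed, a taper is applied, and
# the spectral power and peak are kept as features.

def windowed_features(signal, win_len):
    hann = np.hanning(win_len)
    feats = []
    for i in range(win_len, len(signal) + 1):
        seg = signal[i - win_len:i]
        seg = seg - seg.mean()                  # remove DC
        spec = np.abs(np.fft.rfft(seg * hann))
        feats.append((spec.sum(), spec.max()))  # (total power proxy, peak)
    return np.array(feats)

t = np.arange(600)
charging = 1 - np.exp(-t / 100.0)      # RC-style step-up, time constant 100
feats = windowed_features(charging, win_len=200)   # window = 2 time constants
print(feats.shape)
```

Plotting the two feature columns against time would reproduce the kind of process-specific shapes the abstract reports; a second, shorter window tuned to the sensor-disturbance time constant would be run the same way.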
High pressure processing's potential to inactivate norovirus and other foodborne viruses
USDA-ARS?s Scientific Manuscript database
High pressure processing (HPP) can inactivate human norovirus. However, all viruses are not equally susceptible to HPP. Pressure treatment parameters such as required pressure levels, initial pressurization temperatures, and pressurization times substantially affect inactivation. How food matrix ...
Digital equalization of time-delay array receivers on coherent laser communications.
Belmonte, Aniceto
2017-01-15
Field conjugation arrays use adaptive combining techniques on multi-aperture receivers to improve the performance of coherent laser communication links by mitigating the consequences of atmospheric turbulence on the down-converted coherent power. However, this motivates the use of complex receivers as optical signals collected by different apertures need to be adaptively processed, co-phased, and scaled before they are combined. Here, we show that multiple apertures, coupled with optical delay lines, combine retarded versions of a signal at a single coherent receiver, which uses digital equalization to obtain diversity gain against atmospheric fading. We found in our analysis that, instead of field conjugation arrays, digital equalization of time-delay multi-aperture receivers is a simpler and more versatile approach to accomplish reduction of atmospheric fading.
Adaptive reconfigurable V-BLAST type equalizer for cognitive MIMO-OFDM radios
NASA Astrophysics Data System (ADS)
Ozden, Mehmet Tahir
2015-12-01
An adaptive channel shortening equalizer design for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) radio receivers is considered in this presentation. The proposed receiver has desirable features for cognitive and software defined radio implementations. It consists of two sections: MIMO decision feedback equalizer (MIMO-DFE) and adaptive multiple Viterbi detection. In the MIMO-DFE section, a complete modified Gram-Schmidt orthogonalization of multichannel input data is accomplished using sequential processing multichannel Givens lattice stages, so that a Vertical Bell Laboratories Layered Space Time (V-BLAST) type MIMO-DFE is realized at the front-end section of the channel shortening equalizer. Matrix operations, a major bottleneck for receiver operations, are accordingly avoided, and only scalar operations are used. A highly modular and regular radio receiver architecture is achieved, with a structure suitable for digital signal processing (DSP) chip and field programmable gate array (FPGA) implementations, which are important for software defined radio realizations. The MIMO-DFE section of the proposed receiver can also be reconfigured for spectrum sensing and positioning functions, which are important tasks for cognitive radio applications. In connection with the adaptive multiple Viterbi detection section, a systolic array implementation for each channel is performed so that a receiver architecture with high computational concurrency is attained. The total computational complexity is given in terms of the equalizer and desired response filter lengths, alphabet size, and number of antennas. The performance of the proposed receiver is presented for the two-channel case by means of mean squared error (MSE) and probability of error evaluations, conducted for time-invariant and time-variant channel conditions, orthogonal and nonorthogonal transmissions, and two different modulation schemes.
Estimation of lean and fat composition of pork ham using image processing measurements
NASA Astrophysics Data System (ADS)
Jia, Jiancheng; Schinckel, Allan P.; Forrest, John C.
1995-01-01
This paper presents a method of estimating the lean and fat composition of pork ham from cross-sectional area measurements using image processing technology. The relationship of ham lean and fat mass with ham lean and fat areas was studied, and prediction equations for pork ham composition based on the cross-sectional area measurements were developed. The results show that ham lean weight was related to the ham lean area (r = .75, P < .0001), while ham fat weight was related to the ham fat area (r = .79, P = .0001). Ham lean weight was highly related to the product of ham total weight times percentage ham lean area (r = .96, P < .0001). Ham fat weight was highly related to the product of ham total weight times percentage ham fat area (r = .88, P < .0001). The best combination of independent variables for estimating ham lean weight was trimmed wholesale ham weight and percentage ham fat area, with a coefficient of determination of 92%. The best combination of independent variables for estimating ham fat weight was trimmed wholesale ham weight and percentage ham fat area, with a coefficient of determination of 78%. Prediction equations with either two or three independent variables did not significantly increase the accuracy of prediction. The results of this study indicate that the weight of ham lean and fat can be predicted from ham cross-sectional area measurements using image analysis in combination with wholesale ham weight.
Systematic comparisons between PRISM version 1.0.0, BAP, and CSMIP ground-motion processing
Kalkan, Erol; Stephens, Christopher
2017-02-23
A series of benchmark tests was run by comparing results of the Processing and Review Interface for Strong Motion data (PRISM) software version 1.0.0 to Basic Strong-Motion Accelerogram Processing Software (BAP; Converse and Brady, 1992), and to California Strong Motion Instrumentation Program (CSMIP) processing (Shakal and others, 2003, 2004). These tests were performed by using the MATLAB implementation of PRISM, which is equivalent to its public release version in the Java language. Systematic comparisons were made in the time and frequency domains of records processed with PRISM, BAP, and CSMIP, using a set of representative input motions with varying resolutions, frequency content, and amplitudes. Although the details of strong-motion records vary among the processing procedures, there are only minor differences among the waveforms for each component and within the frequency passband common to these procedures. A comprehensive statistical evaluation considering more than 1,800 ground-motion components demonstrates that differences in peak amplitudes of acceleration, velocity, and displacement time series obtained from PRISM and CSMIP processing are equal to or less than 4 percent for 99 percent of the data, and equal to or less than 2 percent for 96 percent of the data. Other statistical measures, including the Euclidian distance (L2 norm) and the windowed root mean square level of processed time series, also indicate that both processing schemes produce statistically similar products.
Automatic latency equalization in VHDL-implemented complex pipelined systems
NASA Astrophysics Data System (ADS)
Zabołotny, Wojciech M.
2016-09-01
In pipelined data processing systems it is very important to ensure that parallel paths delay data by the same number of clock cycles. If that condition is not met, the processing blocks receive data that are not properly aligned in time and produce incorrect results. Manual equalization of latencies is tedious and error-prone work. This paper presents an automatic method of latency equalization in systems described in VHDL. The proposed method uses simulation to measure latencies and to verify the introduced corrections. The solution is portable between different simulation and synthesis tools. The method does not increase the complexity of the synthesized design compared with a solution based on manual latency adjustment. An example implementation of the proposed methodology, together with a simple design demonstrating its use, is available as an open source project under the BSD license.
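The underlying idea of latency equalization, padding every faster parallel path with delay registers until it matches the slowest path, can be sketched as follows (in Python for brevity; the paper's method operates on VHDL designs via simulation).

```python
def equalization_delays(path_latencies):
    """Given per-path latencies (in clock cycles), return the number of
    extra delay registers each path needs so that all parallel paths
    present their data after the same number of cycles."""
    target = max(path_latencies)
    return [target - lat for lat in path_latencies]

# Three parallel processing paths with different pipeline depths:
delays = equalization_delays([3, 7, 5])
# Paths must be padded by 4, 0 and 2 cycles respectively to align at cycle 7.
```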
A novel parallel architecture for local histogram equalization
NASA Astrophysics Data System (ADS)
Ohannessian, Mesrob I.; Choueiter, Ghinwa F.; Diab, Hassan
2005-07-01
Local histogram equalization is an image enhancement algorithm that has found wide application in the pre-processing stage of areas such as computer vision, pattern recognition and medical imaging. The computationally intensive nature of the procedure, however, is a main limitation when real-time interactive applications are in question. This work explores the possibility of performing parallel local histogram equalization, using an array of special-purpose elementary processors, through an HDL implementation that targets FPGA or ASIC platforms. A novel parallelization scheme is presented and the corresponding architecture is derived. The algorithm is reduced to pixel-level operations. Processing elements are assigned image blocks, to maintain a reasonable performance-cost ratio. To further simplify both processor and memory organizations, a bit-serial access scheme is used. A brief performance assessment is provided to illustrate and quantify the merit of the approach.
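A minimal block-based local histogram equalization is shown below, processing each image block independently as the abstract's processing elements would in parallel (a sequential NumPy sketch, not the paper's bit-serial HDL architecture).

```python
import numpy as np

def equalize_block(block, levels=256):
    """Histogram-equalize one image block via its cumulative histogram."""
    hist = np.bincount(block.ravel(), minlength=levels)
    cdf = np.cumsum(hist)
    # Map grey levels so that the output histogram is (near-)uniform.
    lut = np.round((levels - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[block]

def local_histogram_equalization(image, block_size=8):
    """Equalize each block independently, mirroring the paper's
    assignment of image blocks to processing elements (sequential here)."""
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            out[i:i + block_size, j:j + block_size] = \
                equalize_block(image[i:i + block_size, j:j + block_size])
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 128, size=(32, 32), dtype=np.uint8)  # low-contrast input
enhanced = local_histogram_equalization(img)
```

Each block touches only its own pixels and histogram, which is why the computation maps naturally onto an array of independent processing elements.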
Dyjas, Oliver; Ulrich, Rolf
2014-01-01
In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.
An Evaluation of the Factor Structure of the HRM Survey, Forms 9 and 11
1976-07-01
[Garbled table fragment; recoverable factor names: Equal Opportunity Index; Social Problems and Processes; Drug Abuse Index; Alcoholism Prevention Index. Item ranges: 64-67 (Form 9); 65-68, 70 (Form 11).]
Broadband optical equalizer using fault tolerant digital micromirrors.
Riza, Nabeel; Mughal, M Junaid
2003-06-30
For the first time, the design and demonstration of a broadband equalizer operating in a near-continuous spectral processing mode are described, using the earlier proposed macro-pixel spatial approach for multiwavelength fiber-optic attenuation in combination with a high-spectral-resolution broadband transmissive volume Bragg grating. The demonstrated design features low loss and low polarization-dependent loss with broadband operation. Such an analog-mode spectral processor can impact optical applications ranging from test and instrumentation to dynamic all-optical networks.
12 CFR 1070.22 - Fees for processing requests for CFPB records.
Code of Federal Regulations, 2013 CFR
2013-01-01
... CFPB shall charge the requester for the actual direct cost of the search, including computer search time, runs, and the operator's salary. The fee for computer output will be the actual direct cost. For... and the cost of operating the computer to process a request) equals the equivalent dollar amount of...
Li, Wenjin
2018-02-28
The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed-phase processes. However, a quantitative description of the properties of the transition path ensemble is far from established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugate coordinates remained equal. Higher energies were observed on several coordinates that are highly coupled to the reaction coordinate, while the rest were almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses of energy distributions provide new insights into the transition path ensemble.
Hybrid acousto-optic and digital equalization for microwave digital radio channels
NASA Astrophysics Data System (ADS)
Anderson, C. S.; Vanderlugt, A.
1990-11-01
Digital radio transmission systems use complex modulation schemes that require powerful signal-processing techniques to correct channel distortions and to minimize BERs. This paper proposes combining the computational power of acousto-optic processing with the accuracy of digital processing to produce a hybrid channel equalizer that exceeds the performance of digital equalization alone. Analysis shows that a hybrid equalizer for 256-level quadrature amplitude modulation (QAM) performs better than a digital equalizer for 64-level QAM.
Quantum Jarzynski equality of measurement-based work extraction
NASA Astrophysics Data System (ADS)
Morikuni, Yohei; Tajima, Hiroyasu; Hatano, Naomichi
2017-03-01
Many studies of quantum-size heat engines assume that the dynamics of an internal system is unitary and that the extracted work is equal to the energy loss of the internal system. Both assumptions, however, should be under scrutiny. In the present paper, we analyze quantum-scale heat engines, employing the measurement-based formulation of the work extraction recently introduced by Hayashi and Tajima [M. Hayashi and H. Tajima, arXiv:1504.06150]. We first demonstrate the inappropriateness of the unitary time evolution of the internal system (namely, the first assumption above) using a simple two-level system; we show that the variance of the energy transferred to an external system diverges when the dynamics of the internal system is approximated to a unitary time evolution. Second, we derive the quantum Jarzynski equality based on the formulation of Hayashi and Tajima as a relation for the work measured by an external macroscopic apparatus. The right-hand side of the equality reduces to unity for "natural" cyclic processes but fluctuates wildly for noncyclic ones, exceeding unity often. This fluctuation should be detectable in experiments and provide evidence for the present formulation.
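The structure of the Jarzynski equality discussed above can be illustrated with an exact calculation for an instantaneous quench of a classical two-level system (a generic textbook illustration, not the measurement-based formulation of Hayashi and Tajima).

```python
import numpy as np

beta = 1.0
E_initial = np.array([0.0, 1.0])   # two-level system before the quench
E_final = np.array([0.0, 2.5])     # level spacing after the quench

# Initial thermal (Boltzmann) occupation probabilities.
p = np.exp(-beta * E_initial)
p /= p.sum()

# For an instantaneous quench the work equals the energy change of the
# occupied level, so <exp(-beta W)> can be evaluated exactly.
work = E_final - E_initial
jarzynski_lhs = np.sum(p * np.exp(-beta * work))

# Jarzynski equality: <exp(-beta W)> = exp(-beta dF) = Z_final / Z_initial.
Z_i = np.exp(-beta * E_initial).sum()
Z_f = np.exp(-beta * E_final).sum()
jarzynski_rhs = Z_f / Z_i
```

For a cyclic process (E_final equal to E_initial) the right-hand side reduces to unity, matching the behaviour the abstract describes for "natural" cyclic processes.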
Edge enhancement and image equalization by unsharp masking using self-adaptive photochromic filters.
Ferrari, José A; Flores, Jorge L; Perciante, César D; Frins, Erna
2009-07-01
A new method for real-time edge enhancement and image equalization using photochromic filters is presented. The reversible self-adaptive capacity of photochromic materials is used for creating an unsharp mask of the original image. This unsharp mask produces a kind of self-filtering of the original image. Unlike the usual Fourier (coherent) image processing, the technique we propose can also be used with incoherent illumination. Validation experiments with bacteriorhodopsin and photochromic glass are presented.
Lesbian Relationships--A Struggle Towards Couple Equality.
ERIC Educational Resources Information Center
Sang, Barbara E.
This paper explores some themes which emerge frequently in lesbian relationships as both members of the couple strive to realize their unique potential. Some of the areas discussed are: (1) roles and the decision making process; (2) emotional expectations; (3) time alone and time together; (4) issues that arise when both persons value their work;…
Achieving equal pay for comparable worth through arbitration.
Wisniewski, S C
1982-01-01
Traditional "women's jobs" often pay relatively low wages because of the effects of institutionalized stereotypes concerning women and their role in the work place. One way of dealing with sex discrimination that results in job segregation is to narrow the existing wage differential between "men's jobs" and "women's jobs." Where the jobs are dissimilar on their face, this narrowing of pay differences involves implementing the concept of "equal pay for jobs of comparable worth." Some time in the future, far-reaching, perhaps even industrywide, reductions in male-female pay differentials may be achieved by pursuing legal remedies based on equal pay for comparable worth. However, as the author demonstrates, immediate, albeit more limited, relief for sex-based pay inequities found in specific work places can be obtained by implementing equal pay for jobs of comparable worth through the collective bargaining and arbitration processes.
NASA Astrophysics Data System (ADS)
Lobanova, G. L.; Yurmazova, T. A.; Shiyan, L. N.; Machekhina, K. I.
2016-02-01
The present work continues a study of the complex physical and chemical processes occurring in natural waters containing humic-type organic substances under the influence of pulsed electrical discharges in a layer of iron pellets. Processing humic substances in a layer of iron granules by means of a pulsed electric discharge is relevant to water treatment technologies for purifying natural waters of the northern regions of Russia from organic compounds of humic origin. At a molar ratio of sodium humate to iron(II) ions of 2:3, the solution colour and chemical oxygen demand are reduced because sodium humate and iron(II) ions participate in oxidation-reduction reactions, followed by coagulation with the formation of insoluble compounds at a pH of 6.5. The pulsed electric discharge time required to achieve this molar ratio, equal to 10 seconds, was identified experimentally. The role of secondary processes that occur after the discharge is switched off is shown. The contact time of the active erosion products with sodium humate is established to be 1 hour. During this time, the permanganate oxidation value and the iron concentration in solution reach the maximum permissible concentrations, and a further increase in contact time does not change the controlled parameters.
A Longitudinal Investigation into L2 Learners' Cognitive Processes during Study Abroad
ERIC Educational Resources Information Center
Ren, Wei
2014-01-01
The present study longitudinally investigates the cognitive processes of advanced L2 learners engaged in a multimedia task that elicited status-equal and status-unequal refusals in English during their study abroad. Data were collected three times by retrospective verbal report from 20 Chinese learners who were studying abroad over the course of…
ERIC Educational Resources Information Center
Jarvikivi, Juhani; Pyykkonen, Pirita; Niemi, Jussi
2009-01-01
The authors compared sublexical and supralexical approaches to morphological processing with unambiguous and ambiguous inflected words and words with ambiguous stems in 3 masked and unmasked priming experiments in Finnish. Experiment 1 showed equal facilitation for all prime types with a short 60-ms stimulus onset asynchrony (SOA) but significant…
Photoconductivity of Activated Carbon Fibers
DOE R&D Accomplishments Database
Kuriyama, K.; Dresselhaus, M. S.
1990-08-01
The photoconductivity is measured on a high-surface-area disordered carbon material, namely activated carbon fibers, to investigate their electronic properties. Measurements of decay time, recombination kinetics, and temperature dependence of the photoconductivity generally reflect the electronic properties of a material. The material studied in this paper is a highly disordered carbon derived from a phenolic precursor, having a huge specific surface area of 1000–2000 m²/g. Our preliminary thermopower measurements suggest that this carbon material is a p-type semiconductor with an amorphous-like microstructure. The intrinsic electrical conductivity, on the order of 20 S/cm at room temperature, increases with increasing temperature in the range 30–290 K. In contrast with the intrinsic conductivity, the photoconductivity in vacuum decreases with increasing temperature. The recombination kinetics changes from a monomolecular process at room temperature to a bimolecular process at low temperatures. The observed decay time of the photoconductivity is ≈0.3 s. The magnitude of the photoconductive signal was reduced by a factor of ten when the sample was exposed to air. The intrinsic carrier density and the activation energy for conduction are estimated to be ≈10²¹/cm³ and ≈20 meV, respectively. The majority of the induced photocarriers and of the intrinsic carriers are trapped, resulting in the long decay time of the photoconductivity and the positive temperature dependence of the conductivity.
A real-time programming system.
Townsend, H R
1979-03-01
The paper describes a Basic Operating and Scheduling System (BOSS) designed for a small computer. User programs are organised as self-contained modular 'processes' and the way in which the scheduler divides the time of the computer equally between them, while arranging for any process which has to respond to an interrupt from a peripheral device to be given the necessary priority, is described in detail. Next the procedures provided by the operating system to organise communication between processes are described, and how they are used to construct dynamically self-modifying real-time systems. Finally, the general philosophy of BOSS and applications to a multi-processor assembly are discussed.
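The time-slicing idea described above, equal quanta in rotation with interrupt-driven processes given priority, can be sketched as follows (an illustrative model, not the actual BOSS implementation).

```python
from collections import deque

def round_robin(processes, quantum, interrupt_queue=None):
    """Divide CPU time equally among processes in fixed quanta, but let
    any process waiting on an interrupt preempt the normal rotation.
    Each entry is a (name, remaining_work) pair; returns the schedule as
    a list of (name, slice_used) tuples."""
    ready = deque(processes)
    interrupts = deque(interrupt_queue or [])
    order = []
    while ready or interrupts:
        if interrupts:                 # interrupt handlers run first
            name, work = interrupts.popleft()
        else:
            name, work = ready.popleft()
        slice_used = min(quantum, work)
        order.append((name, slice_used))
        if work > quantum:             # unfinished work re-joins the rotation
            ready.append((name, work - quantum))
    return order

schedule = round_robin([("A", 3), ("B", 1), ("C", 2)], quantum=1)
# A, B and C each receive one-unit slices in rotation until they finish.
```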
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arroyo, F.; Fernandez-Pereira, C.; Olivares, J.
2009-04-15
In this article, a hydrometallurgical method for the selective recovery of germanium from fly ash (FA) has been tested at pilot-plant scale. The pilot plant flowsheet comprised a first stage of water leaching of FA and a subsequent selective recovery of the germanium from the leachate by a solvent extraction method. The solvent extraction method was based on Ge complexation with catechol in an aqueous solution, followed by extraction of the Ge-catechol complex (Ge(C₆H₄O₂)₃²⁻) with an extracting organic reagent (trioctylamine) diluted in an organic solvent (kerosene), followed by stripping of the organic extract. The process has been tested on a FA generated in an integrated gasification combined cycle (IGCC) process. The paper describes the designed 5 kg/h pilot plant and the tests performed on it. Under the operational conditions tested, approximately 50% of the germanium could be recovered from FA after water extraction at room temperature. Regarding the solvent extraction method, the best operational conditions for obtaining a concentrated germanium-bearing solution practically free of impurities were as follows: extraction time equal to 20 min; aqueous phase/organic phase volumetric ratio equal to 5; stripping with 1 M NaOH; stripping time equal to 30 min; and stripping phase/organic phase volumetric ratio equal to 5. 95% of the germanium was recovered from water leachates using those conditions.
First passage properties of a generalized Pólya urn
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2016-12-01
A generalized two-component Pólya urn process, parameterized by a variable α , is studied in terms of the likelihood that due to fluctuations the initially smaller population in a scenario of competing population growth eventually becomes the larger, or is the larger after a certain passage of time. By casting the problem as an inhomogeneous directed random walk we quantify this role-reversal phenomenon through the first passage probability that equality in size is first reached at a given time, and the related exit probability that equality in size is reached no later than a given time. Using an embedding technique, exact results are obtained which complement existing results and provide new insights into behavioural changes (akin to phase transitions) which occur at defined values of α .
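A direct simulation of such a generalized urn, in which the next addition goes to component i with probability proportional to n_i^α, gives a quick way to estimate the first-passage and exit probabilities discussed above (the starting counts and the value of α below are arbitrary illustrations, not the paper's exact parameterization).

```python
import random

def first_equality_time(n1, n2, alpha, max_steps=2000, rng=None):
    """Simulate a generalized two-component Polya urn in which the next
    ball joins component i with probability proportional to n_i**alpha.
    Return the step at which the two populations are first equal in size,
    or None if equality is not reached within max_steps."""
    rng = rng or random.Random()
    for step in range(1, max_steps + 1):
        w1, w2 = n1 ** alpha, n2 ** alpha
        if rng.random() < w1 / (w1 + w2):
            n1 += 1
        else:
            n2 += 1
        if n1 == n2:
            return step
    return None

# Exit probability: equality first reached no later than max_steps,
# starting from an initially unequal urn (1 vs 3 balls), alpha = 0.5.
rng = random.Random(42)
trials = 500
hits = sum(first_equality_time(1, 3, 0.5, rng=rng) is not None
           for _ in range(trials))
exit_probability = hits / trials
```

Since each step changes the size difference by exactly one, equality from an initial difference of two can only first occur at an even step, a small structural check on the simulation.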
Adsorption process to recover hydrogen from feed gas mixtures having low hydrogen concentration
Golden, Timothy Christopher; Weist, Jr., Edward Landis; Hufton, Jeffrey Raymond; Novosat, Paul Anthony
2010-04-13
A process for selectively separating hydrogen from at least one more strongly adsorbable component in a plurality of adsorption beds to produce a hydrogen-rich product gas from a low hydrogen concentration feed with a high recovery rate. Each of the plurality of adsorption beds is subjected to a repetitive cycle. The process comprises an adsorption step for producing the hydrogen-rich product from a feed gas mixture comprising 5% to 50% hydrogen, at least two pressure equalization by void space gas withdrawal steps, a provide purge step resulting in a first pressure decrease, a blowdown step resulting in a second pressure decrease, a purge step, at least two pressure equalization by void space gas introduction steps, and a repressurization step. The second pressure decrease is at least 2 times greater than the first pressure decrease.
PAM-4 delivery based on pre-distortion and CMMA equalization in a ROF system at 40 GHz
NASA Astrophysics Data System (ADS)
Zhou, Wen; Zhang, Jiao; Han, Xifeng; Kong, Miao; Gou, Pengqi
2018-06-01
In this paper, we propose PAM-4 delivery in a ROF system at 40 GHz. The PAM-4 transmission data are generated via look-up table (LUT) pre-distortion, then delivered over 25 km of single-mode fiber and a 0.5 m wireless link. At the receiver side, the received signal is processed with cascaded multi-modulus algorithm (CMMA) equalization to improve the decision precision. Our measured results show that 10 Gbaud PAM-4 transmission in a ROF system at 40 GHz can be achieved with a BER of 1.6 × 10⁻³. To our knowledge, this is the first time LUT pre-distortion and CMMA equalization have been introduced in a ROF system to improve signal performance.
Dual-Task Processing When Task 1 Is Hard and Task 2 Is Easy: Reversed Central Processing Order?
ERIC Educational Resources Information Center
Leonhard, Tanja; Fernandez, Susana Ruiz; Ulrich, Rolf; Miller, Jeff
2011-01-01
Five psychological refractory period (PRP) experiments were conducted with an especially time-consuming first task (Experiments 1, 3, and 5: mental rotation; Experiments 2 and 4: memory scanning) and with equal emphasis on the first task and on the second (left-right tone judgment). The standard design with varying stimulus onset asynchronies…
Pihlajaniemi, Ville; Sipponen, Satu; Sipponen, Mika H; Pastinen, Ossi; Laakso, Simo
2014-02-01
In the enzymatic hydrolysis of lignocellulosic materials, recycling of the solid residue has previously been considered within the context of enzyme recycling. In this study, a steady-state investigation of a solids-recycling process was made with pretreated wheat straw and compared to sequential and batch hydrolysis at constant reaction times, substrate feed, and liquid and enzyme consumption. Compared to batch hydrolysis, the recycling and sequential processes showed roughly equal hydrolysis yields, while the volumetric productivity was significantly increased. In the 72-h process the improvement was 90% due to an increased reaction consistency, while the solids feed was 16% of the total process constituents. The improvement resulted primarily from product removal, which was equally efficient in the solids-recycling and sequential hydrolysis processes. No evidence was found of enzymes accumulating beyond the accumulation of the substrate in recycling. A mathematical model of solids recycling, based on a geometric series, was constructed. Copyright © 2013 Elsevier Ltd. All rights reserved.
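The geometric-series structure of a solids-recycling steady state can be sketched as follows: if a fraction r of the solids is carried into each subsequent batch, the accumulated solids form the series feed·(1 + r + r² + ...). The recycle fraction below is hypothetical; the paper's model parameters are not given in the abstract.

```python
def steady_state_solids(feed_per_batch, recycle_fraction, n_rounds=None):
    """Solids present at steady state when a fixed fraction of the residue
    is recycled into the next batch.  The accumulated amount is the
    geometric series feed * (1 + r + r^2 + ...) = feed / (1 - r).

    If n_rounds is given, return the partial sum after that many rounds
    instead of the infinite-horizon limit."""
    r = recycle_fraction
    if n_rounds is None:
        return feed_per_batch / (1 - r)
    return feed_per_batch * (1 - r ** n_rounds) / (1 - r)

# With 40% of the residue recycled, the reactor consistency converges to
# 1 / (1 - 0.4), i.e. about 1.67 times the fresh feed per batch.
limit = steady_state_solids(1.0, 0.4)
after_five = steady_state_solids(1.0, 0.4, n_rounds=5)
```

The partial sums show how quickly the reactor consistency approaches its steady-state value over successive recycling rounds.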
Construction of moment-matching multinomial lattices using Vandermonde matrices and Gröbner bases
NASA Astrophysics Data System (ADS)
Lundengârd, Karl; Ogutu, Carolyne; Silvestrov, Sergei; Ni, Ying; Weke, Patrick
2017-01-01
In order to describe and analyze the quantitative behavior of stochastic processes, such as the process followed by a financial asset, various discretization methods are used. One such set of methods are lattice models, where a time interval is divided into equal time steps and the rate of change of the process is restricted to a particular set of values in each time step. The well-known binomial and trinomial models are the most commonly used in applications, although several kinds of higher-order models have also been examined. Here we examine various ways of designing higher-order lattice schemes with different node placements in order to guarantee moment matching with the process.
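As an example of moment matching on a lattice, the step probabilities of a trinomial model can be chosen so that the lattice increment reproduces the mean and variance of dX = μ dt + σ dW exactly (a standard construction, much simpler than the Vandermonde/Gröbner machinery of the paper).

```python
import math

def trinomial_probabilities(mu, sigma, dt, dx):
    """Up/middle/down probabilities for a trinomial lattice step of size
    +dx / 0 / -dx, chosen so that the lattice increment matches the mean
    and variance of dX = mu*dt + sigma*dW exactly."""
    a = (sigma ** 2 * dt + (mu * dt) ** 2) / (2 * dx ** 2)
    b = mu * dt / (2 * dx)
    p_up, p_down = a + b, a - b
    p_mid = 1.0 - p_up - p_down
    return p_up, p_mid, p_down

# The common node spacing dx = sigma * sqrt(3 * dt) keeps all three
# probabilities strictly between 0 and 1 for reasonable parameters.
mu, sigma, dt = 0.05, 0.2, 1 / 252
dx = sigma * math.sqrt(3 * dt)
p_up, p_mid, p_down = trinomial_probabilities(mu, sigma, dt, dx)

mean = (p_up - p_down) * dx               # equals mu * dt
var = (p_up + p_down) * dx ** 2 - mean ** 2  # equals sigma**2 * dt
```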
The effect of centralization of health care services on travel time and its equality.
Kobayashi, Daisuke; Otsubo, Tetsuya; Imanaka, Yuichi
2015-03-01
To analyze the regional variations in travel time between patient residences and medical facilities for the treatment of ischemic heart disease and breast cancer, and to simulate the effects of health care services centralization on travel time and equality of access. We used medical insurance claims data for inpatients and outpatients for the two target diseases that had been filed between September 2008 and May 2009 in Kyoto Prefecture, Japan. Using a geographical information system, patient travel times were calculated based on the driving distance between patient residences and hospitals via highways and toll roads. Locations of residences and hospital locations were identified using postal codes. We then conducted a simulation analysis of centralization of health care services to designated regional core hospitals. The simulated changes in potential spatial access to care were examined. Inequalities in access to care were examined using Gini coefficients, which ranged from 0.4109 to 0.4574. Simulations of health care services centralization showed reduced travel time for most patients and overall improvements in equality of access, except in breast cancer outpatients. Our findings may contribute to the decision-making process in policies aimed at improving the potential spatial access to health care services. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
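The Gini coefficient used above to quantify inequality of access can be computed directly from a sample of travel times (the travel times below are hypothetical, not the study's data).

```python
import numpy as np

def gini(values):
    """Gini coefficient of a sample: 0 means perfect equality, values
    approaching 1 mean extreme inequality."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    # Closed form of sum_i sum_j |x_i - x_j| / (2 * n^2 * mean),
    # using the sorted-rank representation.
    index = np.arange(1, n + 1)
    return (2 * np.sum(index * x) / (n * np.sum(x))) - (n + 1) / n

# Hypothetical travel times (minutes); centralization would change these.
travel_times = [10, 12, 15, 20, 25, 40, 60, 90]
g = gini(travel_times)
```

Comparing g before and after a simulated centralization scenario is one way to express the kind of equality-of-access change the study reports.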
Changing Concepts of Educational Equality
ERIC Educational Resources Information Center
Cropley, A. J.
1976-01-01
States that if educational equality is defined, not in terms of equal learning facilities and equally well qualified teachers, but in terms of equal outcomes, equality has not been achieved. True equality implies recognition of learning as life-long process, for the purpose of self-fulfillment. (Author/RW)
Improving Passive Time Reversal Underwater Acoustic Communications Using Subarray Processing.
He, Chengbing; Jing, Lianyou; Xi, Rui; Li, Qinyuan; Zhang, Qunfei
2017-04-24
Multichannel receivers are usually employed in high-rate underwater acoustic communication to achieve spatial diversity. In the context of multichannel underwater acoustic communications, passive time reversal (TR) combined with a single-channel adaptive decision feedback equalizer (TR-DFE) is a low-complexity solution to achieve both spatial and temporal focusing. In this paper, we present a novel receiver structure to combine passive time reversal with a low-order multichannel adaptive decision feedback equalizer (TR-MC-DFE) to improve the performance of the conventional TR-DFE. First, the proposed method divides the whole received array into several subarrays. Second, we conduct passive time reversal processing in each subarray. Third, the multiple subarray outputs are equalized with a low-order multichannel DFE. We also investigated different channel estimation methods, including least squares (LS), orthogonal matching pursuit (OMP), and improved proportionate normalized least mean squares (IPNLMS). The bit error rate (BER) and output signal-to-noise ratio (SNR) performances of the receiver algorithms are evaluated using simulation and real data collected in a lake experiment. The source-receiver range is 7.4 km, and the data rate with quadrature phase shift keying (QPSK) signal is 8 kbits/s. The uncoded BER of the single input multiple output (SIMO) systems varies between 1×10⁻¹ and 2×10⁻² for the conventional TR-DFE, and between 1×10⁻² and 1×10⁻³ for the proposed TR-MC-DFE when eight hydrophones are utilized. Compared to conventional TR-DFE, the average output SNR of the experimental data is enhanced by 3 dB. PMID:28441763
Extraction of astaxanthin from microalgae: process design and economic feasibility study
NASA Astrophysics Data System (ADS)
Zgheib, Nancy; Saade, Roxana; Khallouf, Rindala; Takache, Hosni
2018-03-01
In this work, the process design and economic feasibility of natural astaxanthin extraction from the Haematococcus pluvialis species are reported. A complete process drawing was first produced, and the process was then designed around five main steps: harvesting, cell disruption, spray drying, supercritical CO2 extraction, and anaerobic digestion. The major components of the facility would include sedimentation tanks, a disk stack centrifuge, a bed miller, a spray dryer, a multistage compressor, an extractor, a pasteurizer, and a digester. All units were sized assuming 10 kg/h of dried biomass as feedstock, producing nearly 2592 kg of astaxanthin per year. The investment payback time and the return on investment (ROI) were estimated for different market prices of astaxanthin. Based on the results, the production process becomes economically feasible for a market price higher than 1500/kg; a payback period of 1 year and an ROI of 113% were estimated for an astaxanthin market price of 6000/kg.
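The undiscounted payback period and ROI quoted above follow from simple ratios; a sketch with hypothetical cost figures chosen to land near the reported 1-year payback and 113% ROI (the paper's actual cost breakdown is not given in the abstract).

```python
def payback_and_roi(capital_cost, annual_revenue, annual_operating_cost):
    """Simple (undiscounted) payback period in years and return on
    investment as a fraction of the capital cost."""
    annual_profit = annual_revenue - annual_operating_cost
    payback_years = capital_cost / annual_profit
    roi = annual_profit / capital_cost
    return payback_years, roi

# Illustrative numbers only; all three figures below are hypothetical.
payback, roi = payback_and_roi(
    capital_cost=10_000_000,
    annual_revenue=25_000_000,
    annual_operating_cost=13_700_000,
)
# Annual profit of 11.3 M gives a payback just under a year and ROI of 113%.
```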
40 CFR 63.8005 - What requirements apply to my process vessels?
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature, as required by § 63.1257(d)(3)(iii)(B), you may elect to measure the liquid temperature in the... the daily averages specified in § 63.998(b)(3). An operating block is a period of time that is equal to the time from the beginning to end of an emission episode or sequence of emission episodes. (g...
Testing the causality of Hawkes processes with time reversal
NASA Astrophysics Data System (ADS)
Cordi, Marcus; Challet, Damien; Muni Toke, Ioane
2018-03-01
We show that univariate and symmetric multivariate Hawkes processes are only weakly causal: the true log-likelihoods of real and reversed event time vectors are almost equal, so parameter estimation via maximum likelihood depends only weakly on the direction of the arrow of time. In ideal (synthetic) conditions, tests of goodness of parametric fit unambiguously reject backward event times, which implies that inferring kernels from time-symmetric quantities, such as the autocovariance of the event rate, only rarely produces statistically significant fits. Finally, we find that fitting financial data with many-parameter kernels may yield significant fits for both arrows of time for the same event time vector, sometimes favouring the backward time direction. This shows that a significant fit of Hawkes processes to real data with flexible kernels does not imply a definite arrow of time unless one tests for it.
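The comparison behind the weak-causality claim can be sketched with illustrative parameters: simulate a univariate exponential-kernel Hawkes process (Ogata thinning) and evaluate the exact log-likelihood of both the forward and the time-reversed event times. The parameter values here are made up for the illustration.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Ogata thinning for intensity lambda(t) = mu + sum(alpha * exp(-beta*(t - t_i)))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        excite = alpha * np.sum(np.exp(-beta * (t - np.array(events)))) if events else 0.0
        lam_bar = mu + excite            # valid bound: intensity decays until next event
        t += rng.exponential(1.0 / lam_bar)
        if t > t_max:
            return np.array(events)
        lam_t = mu + (alpha * np.sum(np.exp(-beta * (t - np.array(events)))) if events else 0.0)
        if rng.uniform() <= lam_t / lam_bar:
            events.append(t)

def hawkes_loglik(times, mu, alpha, beta, t_max):
    """Exact log-likelihood via the standard recursion for the exponential kernel."""
    ll, a, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            a = np.exp(-beta * (t - prev)) * (1.0 + a)
        ll += np.log(mu + alpha * a)
        prev = t
    return ll - mu * t_max - (alpha / beta) * np.sum(1.0 - np.exp(-beta * (t_max - times)))

t_max = 2000.0
ev = simulate_hawkes(mu=0.5, alpha=0.4, beta=1.0, t_max=t_max)
ll_fwd = hawkes_loglik(ev, 0.5, 0.4, 1.0, t_max)
ll_rev = hawkes_loglik(np.sort(t_max - ev), 0.5, 0.4, 1.0, t_max)   # reversed arrow of time
```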
Destruction of humic substances by pulsed electrical discharge
NASA Astrophysics Data System (ADS)
Lobanova, G. L.; Yurmazova, T. A.; Shiyan, L. N.; Machekhina, K. I.; Davidenko, M. A.
2017-01-01
Currently, the water resources of the Tomsk region are groundwater, which is limited by high concentrations of iron and manganese ions and organic substances. These impurities are present in water in different forms, such as soluble salts and colloids. The present work therefore continues research on processes in natural waters containing humic substances under the influence of pulsed electrical discharges in a layer of iron pellets. It is shown that the main stage of removing humic substances during treatment by pulsed electric discharge in a layer of iron granules is a complex process comprising several stages: formation of iron oxyhydroxide colloid particles, sorption and coagulation with humic macromolecules, growth of the dispersed-phase particles, and precipitation. The driving force for the formation and coagulation of the dispersed phase is the differing charge of the colloid particles (the zeta potential of Fe(OH)3 is +8 mV; that of the humic substances is -70 mV). The most intense reduction of permanganate oxidizability to the maximum permissible concentration occurs at a processing time of 10 seconds. The contact time of active erosion products with sodium humate was established to be 1 hour; within this time the permanganate oxidizability reaches the maximum permissible concentration, and the iron concentration in solution reaches the maximum permissible concentration after filtration.
Theurich, Gordon R.
1976-01-01
1. In a method of separating isotopes in a high speed gas centrifuge wherein a vertically oriented cylindrical rotor bowl is adapted to rotate about its axis within an evacuated chamber, and wherein an annular molecular pump having an intake end and a discharge end encircles the uppermost portion of said rotor bowl, said molecular pump being attached along its periphery in a leak-tight manner to said evacuated chamber, and wherein end cap closure means are affixed to the upper end of said rotor bowl, and a process gas withdrawal and insertion system enters said bowl through said end cap closure means, said evacuated chamber, molecular pump and end cap defining an upper zone at the discharge end of said molecular pump, said evacuated chamber, molecular pump and rotor bowl defining a lower annular zone at the intake end of said molecular pump, a method for removing gases from said upper and lower zones during centrifuge operation with a minimum loss of process gas from said rotor bowl, comprising, in combination: continuously measuring the pressure in said upper zone, pumping gas from said lower zone from the time the pressure in said upper zone equals a first preselected value until the pressure in said upper zone is equal to a second preselected value, said first preselected value being greater than said second preselected value, and continuously pumping gas from said upper zone from the time the pressure in said upper zone equals a third preselected value until the pressure in said upper zone is equal to a fourth preselected value, said third preselected value being greater than said first, second and fourth preselected values.
23 CFR Appendix D to Subpart D of... - Equal Opportunity Compliance Review Process Flow Chart
Code of Federal Regulations, 2010 CFR
2010-04-01
... 23 Highways 1 2010-04-01 2010-04-01 false Equal Opportunity Compliance Review Process Flow Chart D Appendix D to Subpart D of Part 230 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION..., Subpt. D, App. D Appendix D to Subpart D of Part 230—Equal Opportunity Compliance Review Process Flow...
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
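The equivalent-ensemble construction described above can be sketched in numpy: segment one long record into m equal, (assumed) statistically independent sample records, then average across records at each within-record time index. Synthetic white noise stands in for a measured turbulence signal.

```python
import numpy as np

def equivalent_ensemble(x, m):
    """Split a long time history into m equal sample records (rows)."""
    n = len(x) // m
    return np.reshape(x[: m * n], (m, n))

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)      # white noise: stationary by construction
ens = equivalent_ensemble(x, 50)
ens_mean = ens.mean(axis=0)         # equivalent-ensemble average vs time index
time_mean = ens[0].mean()           # time average over a single sample record
```

Weak stationarity is suggested when `ens_mean` shows no drift with the time index; comparing `ens_mean` with `time_mean` gives the heuristic ergodicity check mentioned in the abstract.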
Ikonos Imagery Product Nonuniformity Assessment
NASA Technical Reports Server (NTRS)
Ryan, Robert; Zanoni, Vicki; Pagnutti, Mary; Holekamp, Kara; Smith, Charles
2002-01-01
During the early stages of the NASA Scientific Data Purchase (SDP) program, three approximately equal vertical stripes were observable in IKONOS imagery of highly spatially uniform sites. Although these effects amounted to less than a few percent of the mean signal, several investigators requested new imagery. Over time, Space Imaging updated its processing to minimize these artifacts. This, however, produced differences between Space Imaging products derived from archive imagery processed at different times: imagery processed before 2/22/01 requires one set of coefficients, while imagery processed after that date requires another. Space Imaging produces its products from raw imagery, so changes in the ground processing over time can change the delivered digital number (DN) values, even for identical orders of a previously acquired scene. NASA Stennis initiated studies to investigate the magnitude of these artifacts and their changes over the lifetime of the system, before and after processing updates.
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
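The random-walk parameter model at the heart of the parameter filter can be illustrated with a deliberately simplified scalar example (not the paper's dual filter): a single time-varying gain k_t in y_t = k_t·u_t + noise is modeled as a random walk and tracked by a scalar Kalman filter. All signals, noise levels, and the step change are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
k_true = 1.0 + 0.5 * (np.arange(n) > n // 2)    # gain steps from 1.0 to 1.5 halfway
u = rng.standard_normal(n)                       # known input signal
y = k_true * u + 0.1 * rng.standard_normal(n)    # noisy measurements

k_hat, p = 0.0, 1.0           # parameter estimate and its variance
q, r = 1e-4, 0.1 ** 2         # random-walk (process) and measurement noise variances
est = np.empty(n)
for t in range(n):
    p += q                                 # predict: random-walk parameter model
    s = u[t] * p * u[t] + r                # innovation variance
    g = p * u[t] / s                       # Kalman gain
    k_hat += g * (y[t] - u[t] * k_hat)     # measurement update
    p *= 1.0 - g * u[t]                    # covariance update
    est[t] = k_hat
```

The process variance q plays the same tuning role as the parameter-filter process covariance discussed above: larger q tracks faster changes at the cost of noisier estimates.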
Stites, Mallory C.; Federmeier, Kara D.; Christianson, Kiel
2016-08-06
We investigate the online processing consequences of encountering compound words with transposed letters (TLs), in order to determine if cross-morpheme TLs are more disruptive to reading than those within a single morpheme, as would be predicted by accounts of obligatory morpho-orthographic decomposition. Two measures of online processing, eye movements and event-related potentials (ERPs), were collected in separate experiments. Participants read sentences containing correctly spelled compound words (cupcake), or compounds with TLs occurring either across morphemes (cucpake) or within one morpheme (cupacke). Results showed that between- and within-morpheme transpositions produced equal processing costs in both measures, in the form of longer reading times (Experiment 1) and a late posterior positivity (Experiment 2) that did not differ between conditions. Our findings converge to suggest that within- and between-morpheme TLs are equally disruptive to recognition, providing evidence against obligatory morpho-orthographic processing and in favour of whole-word access of English compound words during sentence reading.
Stites, Mallory C.; Federmeier, Kara D.; Christianson, Kiel
2017-01-01
The current study investigates the online processing consequences of encountering compound words with transposed letters (TLs), to determine if TLs that cross morpheme boundaries are more disruptive to reading than those within a single morpheme, as would be predicted by accounts of obligatory morpho-orthographic decomposition. Two measures of online processing, eye movements and event-related potentials (ERPs), were collected in separate experiments. Participants read sentences containing correctly spelled compound words (cupcake), or compounds with TLs occurring either across morpheme boundaries (cucpake) or within one morpheme (cupacke). Results showed that between- and within-morpheme transpositions produced equal processing costs in both measures, in the form of longer reading times (Experiment 1) and a late posterior positivity (Experiment 2) that did not differ between conditions. Findings converge to suggest that within- and between-morpheme TLs are equally disruptive to recognition, providing evidence against obligatory morpho-orthographic processing and in favor of whole-word access of English compound words during sentence reading. PMID:28791313
One-loop gravitational wave spectrum in de Sitter spacetime
NASA Astrophysics Data System (ADS)
Fröb, Markus B.; Roura, Albert; Verdaguer, Enric
2012-08-01
The two-point function for tensor metric perturbations around de Sitter spacetime, including one-loop corrections from massless conformally coupled scalar fields, is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies, we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant under the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Selecting the right initial state for the interacting theory via an appropriate iε prescription is crucial for this. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, contrary to the well-defined distribution for arbitrary time separations, the power spectrum is, strictly speaking, ill-defined when loop corrections are included.
Quality and loudness judgments for music subjected to compression limiting.
Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M
2012-08-01
Dynamic-range compression (DRC) is used in the music industry to maximize loudness. The amount of compression applied to commercial recordings has increased over time due to a motivating perspective that louder music is always preferred. In contrast to this viewpoint, artists and consumers have argued that using large amounts of DRC negatively affects the quality of music. However, little research evidence has supported the claims of either position. The present study investigated how DRC affects the perceived loudness and sound quality of recorded music. Rock and classical music samples were peak-normalized and then processed using different amounts of DRC. Normal-hearing listeners rated the processed and unprocessed samples on overall loudness, dynamic range, pleasantness, and preference, using a scaled paired-comparison procedure in two conditions: un-equalized, in which the loudness of the music samples varied, and loudness-equalized, in which loudness differences were minimized. Results indicated that a small amount of compression was preferred in the un-equalized condition, but the highest levels of compression were generally detrimental to quality, whether loudness was equalized or varied. These findings are contrary to the "louder is better" mentality in the music industry and suggest that more conservative use of DRC may be preferred for commercial music.
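To make "amount of compression" concrete, here is a toy static compressor: a hard-knee, instantaneous gain computer with a threshold and ratio. Real music DRC adds attack/release smoothing and makeup gain; this sketch only shows how levels above the threshold are reduced.

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0):
    """Hard-knee static compression: levels above threshold_db are scaled by 1/ratio."""
    eps = 1e-12                                       # avoid log of zero
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)             # gain reduction in dB
    return x * 10.0 ** (gain_db / 20.0)

x = np.array([0.01, 0.1, 1.0])    # roughly -40, -20, 0 dBFS samples
y = compress(x)                   # only the 0 dBFS sample is attenuated
```

With a 4:1 ratio, a sample 20 dB over the threshold is pulled down by 15 dB, which is the kind of dynamic-range narrowing the listeners in the study were judging.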
36 CFR 1234.24 - How does NARA process a waiver request?
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMINISTRATION RECORDS MANAGEMENT FACILITY STANDARDS FOR RECORDS STORAGE FACILITIES Handling Deviations From NARA... alternative offers at least equal protection to Federal records, NARA will consult the appropriate industry... actions and time frames for bringing the facility into compliance are reasonable. (2) If NARA questions...
36 CFR 1234.24 - How does NARA process a waiver request?
Code of Federal Regulations, 2011 CFR
2011-07-01
... ADMINISTRATION RECORDS MANAGEMENT FACILITY STANDARDS FOR RECORDS STORAGE FACILITIES Handling Deviations From NARA... alternative offers at least equal protection to Federal records, NARA will consult the appropriate industry... actions and time frames for bringing the facility into compliance are reasonable. (2) If NARA questions...
The effects of quantity and depth of processing on children's time perception.
Arlin, M
1986-08-01
Two experiments were conducted to investigate the effects of quantity and depth of processing on children's time perception. These experiments tested the appropriateness of two adult time-perception models (attentional and storage size) for younger ages. Children were given stimulus sets of equal time which varied by level of processing (deep/shallow) and quantity (list length). In the first experiment, 28 children in Grade 6 reproduced presentation times of various quantities of pictures under deep (living/nonliving categorization) or shallow (repeating label) conditions. Students also compared pairs of durations. In the second experiment, 128 children in Grades K, 2, 4, and 6 reproduced presentation times under similar conditions with three or six pictures and with deep or shallow processing requirements. Deep processing led to decreased estimation of time. Higher quantity led to increased estimation of time. Comparative judgments were influenced by quantity. The interaction between age and depth of processing was significant. Older children were more affected by depth differences than were younger children. Results were interpreted as supporting different aspects of each adult model as explanations of children's time perception. The processing effect supported the attentional model and the quantity effect supported the storage size model.
The neural bases for valuing social equality.
Aoki, Ryuta; Yomogida, Yukihito; Matsumoto, Kenji
2015-01-01
The neural basis of how humans value and pursue social equality has become a major topic in social neuroscience research. Although recent studies have identified a set of brain regions and possible mechanisms that are involved in the neural processing of equality of outcome between individuals, how the human brain processes equality of opportunity remains unknown. In this review article, first we describe the importance of the distinction between equality of outcome and equality of opportunity, which has been emphasized in philosophy and economics. Next, we discuss possible approaches for empirical characterization of human valuation of equality of opportunity vs. equality of outcome. Understanding how these two concepts are distinct and interact with each other may provide a better explanation of complex human behaviors concerning fairness and social equality. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
A joint equalization algorithm in high speed communication systems
NASA Astrophysics Data System (ADS)
Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin
2018-02-01
This paper presents a joint equalization algorithm for high speed communication systems. The algorithm combines the advantages of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage uses the CMA algorithm, which is not sensitive to frequency offset, and is placed before the carrier recovery loop so that the loop performs better and most of the frequency offset is overcome. The post-equalization stage uses the MMA algorithm to overcome the residual frequency offset. The paper first analyzes the advantages and disadvantages of several equalization algorithms, and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results, constellation diagrams and bit error rate curves, show that the proposed joint equalization algorithm outperforms the traditional algorithms; the residual frequency offset is directly visible in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than CMA, 77 times better than MMA, and 9 times better than CMA-MMA equalization.
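The CMA tap update used for pre-equalization can be illustrated with a toy single-tap example. The cost penalizes deviation of |y|² from a fixed modulus R2, so only the magnitude of the output enters; this phase-blindness is exactly why CMA tolerates carrier frequency offset. The step size and signals are illustrative, and this is not the paper's full pre/post-equalizer chain.

```python
import numpy as np

def cma_step(w, x, r2=1.0, mu=1e-2):
    """One stochastic-gradient CMA update for taps w and input vector x."""
    y = np.vdot(w, x)                   # equalizer output y = w^H x
    e = y * (np.abs(y) ** 2 - r2)       # constant-modulus error (phase-blind)
    return w - mu * np.conj(e) * x

rng = np.random.default_rng(3)
w = np.array([0.5 + 0.0j])              # single-tap toy equalizer, wrong gain
for _ in range(2000):
    # unit-modulus QPSK symbols; an identity channel for simplicity
    s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    w = cma_step(w, np.array([s]))
```

After the loop the tap magnitude is driven to the unit modulus regardless of the (here arbitrary) carrier phase of the symbols.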
Real-time trichromatic holographic interferometry: preliminary study
NASA Astrophysics Data System (ADS)
Albe, Felix; Bastide, Myriam; Desse, Jean-Michel; Tribillon, Jean-Louis H.
1998-08-01
In this paper we relate our preliminary experiments on real-time trichromatic holographic interferometry. For this purpose a CW 'white' laser (argon and krypton; Coherent-Radiation Spectrum model 70), producing about 10 wavelengths, is used. A system consisting of birefringent plates and polarizers allows the selection of a trichromatic TEM00 triplet: a blue line (λ = 476 nm, 100 mW), a green line (λ = 514 nm, 100 mW) and a red line (λ = 647 nm, 100 mW). In a first stage we recorded a trichromatic reflection hologram with a separate reference beam on a single-layer silver-halide panchromatic plate (PFG 03C). After processing, the hologram is put back into the original recording set-up, as in classical experiments on real-time monochromatic holographic interferometry, so that we observe interference fringes between the 3 reconstructed waves and the 3 actual waves. The interference fringes are observed on a screen and recorded by a video camera at 25 frames per second. A color video film about 3 minutes in duration is presented, with examples of phase objects (hot airflow from a candle, airflow from a hand). These results show the possibility of using this technique to study, in real time, aerodynamic wakes and mechanical deformations.
Elastohydrodynamic lubrication theory
NASA Technical Reports Server (NTRS)
Hamrock, B. J.; Dowson, D.
1982-01-01
The isothermal elastohydrodynamic lubrication (EHL) of a point contact was analyzed numerically by simultaneously solving the elasticity and Reynolds equations. In the elasticity analysis the contact zone was divided into equal rectangular areas, and it was assumed that a uniform pressure was applied over each area. In the numerical analysis of the Reynolds equation, a φ analysis (where φ equals the pressure times the film thickness raised to the 3/2 power, φ = p·h^(3/2)) was used to aid the relaxation process. The EHL point contact analysis is applicable for the entire range of elliptical parameters and is valid for any combination of rolling and sliding within the contact.
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
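The stated convergence criterion can be sketched directly: the convergence time is the first index at which a running average of the equalizer output error falls within a 5% margin of the average error measured after 50,000 symbols. The window length and the synthetic decaying error curve below are invented for illustration.

```python
import numpy as np

def convergence_time(err, window=500, margin=0.05, settle=50000):
    """First index where the windowed mean error is within margin of the settled error."""
    err = np.asarray(err, dtype=float)
    final = err[settle:].mean() if len(err) > settle else err[-window:].mean()
    run = np.convolve(err, np.ones(window) / window, mode="valid")
    hit = np.flatnonzero(np.abs(run - final) <= margin * abs(final))
    return int(hit[0]) if hit.size else None

# synthetic exponentially decaying error curve with a noise floor
n = 60000
e = 1.0 * np.exp(-np.arange(n) / 2000.0) + 0.1
t_conv = convergence_time(e)
```

Comparing `t_conv` for two error curves (e.g. a fixed-step vs. an adaptive-step equalizer) gives the kind of relative convergence-time reduction reported above.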
Equality in Education: An Equality of Condition Perspective
ERIC Educational Resources Information Center
Lynch, Kathleen; Baker, John
2005-01-01
Transforming schools into truly egalitarian institutions requires a holistic and integrated approach. Using a robust conception of "equality of condition", we examine key dimensions of equality that are central to both the purposes and processes of education: equality in educational and related resources; equality of respect and recognition;…
OD (Organization Development) Interventions that Enhance Equal Opportunity.
1983-09-01
aide to aaesam mod Identify by week nim obe) Socialization process Socialization model Self-esteem Organization form and structure Equal Opportunity...itself speaks to the way individuals are socialized into the Navy or a perceived lack of socialization . 7 pi ’..7 ’ .. . . 7 Today the Equal ... equal opportunity. Analysis of five different dimensions of the socialization process can be thought of as distinct "tactics" which managers (agents
Girard, Pascal; Koenig-Robert, Roger
2011-01-01
Background: Comparative studies of cognitive processes find similarities between humans and apes, but also monkeys. Even high-level processes, like the ability to categorize classes of object from any natural scene under ultra-rapid time constraints, seem to be present in rhesus macaque monkeys (despite a smaller brain and the lack of language and a cultural background). An interesting and still open question concerns the degree to which the same images are treated with the same efficacy by humans and monkeys when a low-level cue, the spatial frequency content, is controlled. Methodology/Principal Findings: We used a set of natural images equalized in Fourier spectrum and asked whether it is still possible to categorize them as containing an animal, and at what speed. One rhesus macaque monkey performed a forced-choice saccadic task with good accuracy (67.5% and 76% for new and familiar images, respectively), although performance was lower than with non-equalized images. Importantly, the minimum reaction time was still very fast (100 ms). We compared the performance of human subjects with the same setup and the same set of (new) images. Overall mean performance of humans was also lower than with original images (64% correct), but the minimum reaction time was still short (140 ms). Conclusion: Performance on individual images (% correct but not reaction times) for both humans and the monkey was significantly correlated, suggesting that both species use similar features to perform the task. A similar advantage for full-face images was seen for both species. The results also suggest that local low spatial frequency information could be important, a finding that fits the theory that fast categorization relies on a rapid feedforward magnocellular signal. PMID:21326600
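One common way to equalize a set of images in Fourier spectrum, sketched below, is to keep each image's own phase spectrum while imposing the set's average amplitude spectrum, removing spatial-frequency content as a low-level cue. The paper's exact equalization procedure may differ; random arrays stand in for natural images.

```python
import numpy as np

def equalize_spectrum(images):
    """Give every image the set-average Fourier amplitude, keeping its own phase."""
    specs = [np.fft.fft2(im) for im in images]
    mean_amp = np.mean([np.abs(s) for s in specs], axis=0)
    return [np.real(np.fft.ifft2(mean_amp * np.exp(1j * np.angle(s)))) for s in specs]

rng = np.random.default_rng(4)
imgs = [rng.random((32, 32)) for _ in range(3)]
eq = equalize_spectrum(imgs)   # all outputs now share one amplitude spectrum
```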
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle the step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
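The Brownian passage-time distribution is the inverse Gaussian, so the hazard-rate properties listed above can be checked numerically. Below, the distribution is parameterized by mean recurrence time mu and coefficient of variation cv (shape parameter λ = mu/cv²); the parameter values and the crude survival integral are illustrative.

```python
import numpy as np

def bpt_pdf(t, mu=1.0, cv=0.5):
    """Inverse Gaussian (Brownian passage-time) density with mean mu, CoV cv."""
    lam = mu / cv ** 2
    t = np.asarray(t, dtype=float)
    return np.sqrt(lam / (2.0 * np.pi * t ** 3)) * np.exp(
        -lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t)
    )

def bpt_hazard(t, mu=1.0, cv=0.5, dt=1e-4):
    """Hazard rate f(t)/S(t) with a crude rectangle-rule survival integral."""
    grid = np.arange(dt, t, dt)
    cdf = np.sum(bpt_pdf(grid, mu, cv)) * dt
    return float(bpt_pdf(t, mu, cv) / (1.0 - cdf))
```

Consistent with property (1)-(2) above, the hazard is essentially zero shortly after an event and rises to a finite level near the mean recurrence time.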
Can Planning Time Compensate for Individual Differences in Working Memory Capacity?
ERIC Educational Resources Information Center
Nielson, Katharine B.
2014-01-01
Language learners with high working memory capacity have an advantage, all other factors being equal, during the second language acquisition (SLA) process; therefore, identifying a pedagogical intervention that can compensate for low working memory capacity would be advantageous to language learners and instructors. Extensive research on the…
28 CFR 100.21 - Alternative dispute resolution.
Code of Federal Regulations, 2012 CFR
2012-07-01
... impasse arises in negotiations between the FBI and the carrier which precludes the execution of a cooperative agreement, the FBI will consider using mediation with the goal of achieving, in a timely fashion... carrier agree to mediation, the costs of that mediation process shall be shared equally by the FBI and the...
NASA Astrophysics Data System (ADS)
Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al
2018-03-01
One of the most significant constraints of the Von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This architecture significantly impacts Big Data and data-intensive applications such as DNA analysis, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the memristive memory crossbar array using material implication logic in both sequential and parallel methods. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better performance in total execution time than the sequential mapping but has a trade-off in terms of energy consumption and area utilization. Meanwhile, the total crossbar area can be reduced by 1.2× for sequential mapping and 1.5× for parallel mapping, both by using the overlapping technique.
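The logical content of the mapped circuit is easy to state in software. Below is a sketch (in Python rather than crossbar operations; the function names are ours) of an N-bit equality comparator built from material implication, the primitive the paper maps onto the crossbar:

```python
def imply(p, q):
    """Material implication p -> q, the primitive stateful-logic
    operation available on a memristive crossbar."""
    return (not p) or q

def bit_xnor(a, b):
    # (a -> b) AND (b -> a) is true exactly when a == b
    return imply(a, b) and imply(b, a)

def equal_words(x, y, width):
    """N-bit equality comparator: AND-reduce the bitwise XNORs.

    A sequential crossbar mapping evaluates the XNORs one after
    another; a parallel mapping evaluates bit columns concurrently,
    which is the speedup reported in the paper."""
    bits_x = [(x >> i) & 1 for i in range(width)]
    bits_y = [(y >> i) & 1 for i in range(width)]
    return all(bit_xnor(a, b) for a, b in zip(bits_x, bits_y))
```

On real hardware the AND reduction would itself be synthesized from IMPLY and FALSE operations on the crossbar; Python's `and`/`all` stand in for it here.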
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dooley, James H; Lanning, David N
Comminution process of wood veneer to produce wood particles, by feeding wood veneer in a direction of travel substantially normal to grain through a counter-rotating pair of intermeshing arrays of cutting discs arrayed axially perpendicular to the direction of veneer travel, wherein the cutting discs have a uniform thickness (Td), to produce wood particles characterized by a length dimension (L) substantially equal to the Td and aligned substantially parallel to grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) substantially equal to the veneer thickness (Tv) and aligned normal to W and L, wherein the W×H dimensions define a pair of substantially parallel end surfaces with end checking between crosscut fibers.
Study of adsorption process of iron colloid substances on activated carbon by ultrasound
NASA Astrophysics Data System (ADS)
Machekhina, K. I.; Shiyan, L. N.; Yurmazova, T. A.; Voyno, D. A.
2015-04-01
The paper reports on the adsorption of iron colloid substances on activated carbon (PAC, Norit SA UF) assisted by ultrasound. The adsorption time is found to be three hours, with a high-frequency ultrasonic oscillation of 35 kHz. The adsorption capacity of the activated carbon was determined to be about 0.25 mg of iron colloid substances per mg of PAC. The iron colloid substances range in size from 30 to 360 nm. The zeta potential of the iron colloid substances, which consist of iron(III) hydroxide, silicon compounds, and natural organic substances, is about -38 mV. The destruction of the iron colloid substances proceeds with subsequent formation of a precipitate in the form of Fe(OH)3 as a result of the removal of organic substances from the model solution.
Jitter model and signal processing techniques for pulse width modulation optical recording
NASA Technical Reports Server (NTRS)
Liu, Max M.-K.
1991-01-01
A jitter model and signal processing techniques are discussed for data recovery in Pulse Width Modulation (PWM) optical recording. In PWM, information is stored through modulating sizes of sequential marks alternating in magnetic polarization or in material structure. Jitter, defined as the deviation from the original mark size in the time domain, will result in error detection if it is excessively large. A new approach is taken in data recovery by first using a high speed counter clock to convert time marks to amplitude marks, and signal processing techniques are used to minimize jitter according to the jitter model. The signal processing techniques include motor speed and intersymbol interference equalization, differential and additive detection, and differential and additive modulation.
ERIC Educational Resources Information Center
Crabtree, Charlotte; Nash, Gary B.
Developed through a broad-based national consensus building process, the national history standards project has involved working toward agreement both on the larger purposes of history in the school curriculum and on the more specific history understandings and thinking processes all students should have equal opportunity to acquire over 12 years…
Time-Frequency Representations for Speech Signals.
1987-06-01
and subsequent processing can take these weights into account. This is, in principle, safer, but practically it is much harder to think about processing...and frequency along the other. But how should this idea be made precise (the well-known uncertainty principle of Fourier analysis is one of the thorny...produce similar results. ...it is the unique shape that meets the uncertainty principle with equality. 2.2. The quasi-stationary
12 CFR 268.202 - Equal Pay Act.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Equal Pay Act. 268.202 Section 268.202 Banks... REGARDING EQUAL OPPORTUNITY Provisions Applicable to Particular Complaints § 268.202 Equal Pay Act. Complaints alleging violations of the Equal Pay Act shall be processed under this part. ...
The excited-state decay of 1-methyl-2(1H)-pyrimidinone is an activated process.
Ryseck, Gerald; Schmierer, Thomas; Haiser, Karin; Schreier, Wolfgang; Zinth, Wolfgang; Gilch, Peter
2011-07-11
The photophysics of 1-methyl-2(1H)-pyrimidinone (1MP) dissolved in water is investigated by steady-state and time-resolved fluorescence, UV/Vis absorption, and IR spectroscopy. In the experiments, the excitation light is tuned to the lowest-energy absorption band of 1MP, peaking at 302 nm. At room temperature (291 K) its fluorescence lifetime amounts to 450 ps. With increasing temperature this lifetime decreases, equaling 160 ps at 338 K. Internal conversion (IC) repopulating the ground state and intersystem crossing (ISC) to a triplet state are the dominant decay channels of the excited singlet state. At room temperature both channels contribute equally to the decay, that is, the quantum yields of IC and ISC are both approximately 0.5. The temperature dependence of UV/Vis transient absorption signals shows that the activation energy of the IC process (2140 cm⁻¹) is higher than that of the ISC process (640 cm⁻¹). Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Arthur, W J; Markham, O D
1984-04-01
Polonium-210 concentrations were determined for soil, vegetation and small mammal tissues collected at a solid radioactive waste disposal area, near a phosphate ore processing plant and at two rural areas in southeastern Idaho. Polonium concentrations in media sampled near the radioactive waste disposal facility were equal to or less than values from rural area samples, indicating that disposal of solid radioactive waste at the Idaho National Engineering Laboratory Site has not resulted in increased environmental levels of polonium. Concentrations of 210Po in soils, deer mice hide and carcass samples collected near the phosphate processing plant were statistically (P ≤ 0.05) greater than at the other sampling locations; however, the mean 210Po concentrations in soils and small mammal tissues from sampling areas near the phosphate plant were only four and three times greater, respectively, than control values. No statistical (P > 0.05) difference was observed for 210Po concentrations in vegetation among any of the sampling locations.
Novel Estimation of Pilot Performance Characteristics
NASA Technical Reports Server (NTRS)
Bachelder, Edward N.; Aponso, Bimal
2017-01-01
Two mechanisms internal to the pilot affect performance during a tracking task: 1) pilot equalization (i.e., lead/lag); and 2) pilot gain (i.e., sensitivity to the error signal). For some applications McRuer's Crossover Model can be used to anticipate what equalization will be employed to control a vehicle's dynamics. McRuer also established approximate time delays associated with different types of equalization: the more cognitive processing that is required due to equalization difficulty, the larger the time delay. However, the Crossover Model does not predict what the pilot gain will be. A nonlinear pilot control technique, observed by the authors and coined 'amplitude clipping', is shown to improve stability and performance and to reduce workload when employed with vehicle dynamics that require high lead compensation by the pilot. Combining linear and nonlinear methods, a novel approach is used to measure the pilot control parameters when amplitude clipping is present, allowing precise real-time measurement of key pilot control parameters. Based on the results of an experiment designed to probe the primary drivers of workload, a method is developed that estimates pilot spare capacity from readily observable measures and is tested for generality using multi-axis flight data. This paper documents the initial steps in developing a novel, simple objective metric for assessing pilot workload and its variation over time across a wide variety of tasks. Additionally, it offers a tangible, easily implementable methodology for anticipating a pilot's operating parameters and workload, and an effective design tool. The model shows promise in being able to precisely predict the actual pilot settings and workload, and the observed tolerance of pilot parameter variation over the course of operation. Finally, an approach is proposed for generating Cooper-Harper ratings based on the workload and parameter estimation methodology.
NASA Astrophysics Data System (ADS)
Frazer, Gordon J.; Anderson, Stuart J.
1997-10-01
The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal-plus-noise model: x_t = s_t + [v_t + η_t] = Σ_{i=0}^{P−1} A_i e^{j2π(f_i/f_s)t} + v_t + η_t, t ∈ {0, ..., N−1}, f_i = k·f_I + f_o, where the received signal x_t corresponds to the radar return from the target of interest from one azimuth-range cell. The signal has an unknown number of components P, unknown complex amplitudes A_i, and frequencies f_i. The frequency parameters f_o and f_I are unknown, although constrained such that f_o < f_I/2, and the parameter k ∈ {−u, ..., −2, −1, 0, 1, 2, ..., v} is constrained such that the component frequencies f_i are bounded by (−f_s/2, f_s/2). The noise term v_t is typically colored, and represents clutter, interference and various noise sources. It is unknown, except that Σ_t v_t² < ∞; in general, v_t is not well modelled as an auto-regressive process of known order. The additional noise term η_t represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter f_I, representing the frequency interval between harmonic lines. It is desired to determine an estimate of f_I from N samples of x_t. We propose an algorithm to estimate f_I based on Thomson's harmonic line F-test, which is part of the multi-window spectrum estimation method, and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
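The paper's estimator uses Thomson's multi-window harmonic F-test, which we do not reproduce here. As a simpler illustration of the same estimation problem, the sketch below (our construction, not the paper's algorithm) recovers f_I from the periodicity of the periodogram: a comb of lines spaced f_I apart makes the spectrum itself periodic with period f_I, so the autocorrelation of the periodogram peaks at a lag equal to f_I.

```python
import numpy as np

def estimate_line_spacing(x, fs):
    """Estimate the spacing f_I between harmonic lines in x.

    The periodogram of a comb of tones spaced f_I apart is itself
    periodic in frequency with period f_I, so the autocorrelation
    of the periodogram peaks at a lag of f_I (in bins)."""
    n = len(x)
    p = np.abs(np.fft.fft(x * np.hanning(n))) ** 2
    p = p - p.mean()
    ac = np.correlate(p, p, mode="full")[n - 1:]     # non-negative lags
    min_lag = 5                                      # skip the zero-lag peak
    lag = min_lag + int(np.argmax(ac[min_lag:n // 2]))
    return lag * fs / n
```

The Hann window, the minimum lag, and the test frequencies below are illustrative choices; a real HF-radar record would also need the F-test's significance machinery to reject clutter lines.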
NASA Astrophysics Data System (ADS)
Kimura, Masaaki; Inoue, Haruo; Kusaka, Masahiro; Kaizu, Koichi; Fuji, Akiyoshi
This paper describes an analysis method for the friction torque and weld interface temperature during the friction process in steel friction welding. The joining mechanism model of friction welding for the wear and seizure stages was constructed from the actual joining phenomena obtained by experiment. The non-steady two-dimensional heat transfer analysis for the friction process was carried out with the FEM code ANSYS. The contact pressure, heat generation quantity, and friction torque during the wear stage were calculated using the coefficient of friction, which was treated as constant. The thermal stress was included in the contact pressure. On the other hand, those values during the seizure stage were calculated by introducing a coefficient of seizure, which depended on the seizure temperature. The relationship between the seizure temperature and the relative speed at the weld interface in the seizure stage was determined using the experimental results. In addition, the contact pressure and heat generation quantity, which depended on the relative speed of the weld interface, were obtained by including the friction pressure, the relative speed, and the yield strength of the base material in the computational conditions. The calculated friction torque and weld interface temperatures of a low carbon steel joint agreed with the experimental results when the friction pressures were 30 and 90 MPa, the friction speed was 27.5 s⁻¹, and the weld interface diameter was 12 mm. The calculated initial peak torque and the elapsed time to the initial peak torque also agreed with the experimental results under the same conditions. Furthermore, the calculated initial peak torque and elapsed time to the initial peak torque at various friction pressures agreed with the experimental results.
NASA Astrophysics Data System (ADS)
Voeikov, Vladimir L.; Naletov, Vladimir I.
1998-06-01
Nonenzymatic glycation of free or peptide-bound amino acids (the Maillard reaction, MR) plays an important role in aging, diabetic complications and atherosclerosis. MR taking place at high temperatures is accompanied by chemiluminescence (CL). Here the kinetics of CL development in MR proceeding in model systems at room temperature has been analyzed for the first time. Brief heating of glycine and D-glucose solutions to t > 93 °C results in their browning and the appearance of fluorescent properties. In solutions rapidly cooled down to 20 °C, a wave of CL developed, reaching maximum intensity around 40 min after the heating and cooling of the reaction mixture. The CL intensity elevation was accompanied by a certain decoloration of the solution. Appearance of light-absorbing substances and development of CL depended critically upon the temperature of preincubation (≥ 93 °C), initial pH (≥ 11.2), sample volume (≥ 0.5 ml) and reagent concentrations. The dependence of total counts accumulation on system volume above the critical volume was non-monotonic. After reaching maximum values CL began to decline, though only a small part of the glucose and glycine had been consumed. Brief heating of such solutions to the critical temperature resulted in the emergence of a new CL wave. This procedure could be repeated in one and the same reaction system several times. The whole CL kinetic curve best fitted a lognormal distribution. Macrokinetic properties of the process are characteristic of chain reactions with delayed branching. The results also imply that self-organization occurs in this system, and that the course of the process strongly depends upon boundary conditions and periodic interference in its course.
The Effect of Caffeine on Endurance Time to Exhaustion at High Altitude
1989-04-27
decaffeinated tea, and Sweet and Low (Parve Co.). The caffeine drink was a mixture of the placebo, Equal (G.D. Searle & Co.), and anhydrous caffeine...DISCUSSION: It is well established that exposure to altitude affects a multitude of physiological processes which include and transcend the ventilatory...affected a multitude of physiological processes both at rest and during exercise, it is not clear what the mechanism(s) was which caused the improvement in
Self-similar crack-generation effects in the fracture process in brittle materials
NASA Astrophysics Data System (ADS)
Hilarov, V. L.
1998-07-01
Using acoustic-emission data banks we have computed time and space correlation functions for the purpose of investigating crack-propagation self-similarity during the fracture process in brittle materials. It is shown that the whole fracture process may be represented as a two-stage process. In the first stage, the crack propagation is uniform and uncorrelated in space, having a time spectral density of the white-noise type and a correlation fractal dimension approximately equal to that of 3D Euclidean space. In the second stage, this fractal dimension decreases significantly, reaching the value of 2.2-2.4, characteristic of fracture surfaces, while the time spectral density exhibits a significant low-frequency increase, becoming of the 1/f-noise type. The resulting fractal shows no multifractal behaviour, appearing to be a single fractal.
From Free Expansion to Abrupt Compression of an Ideal Gas
ERIC Educational Resources Information Center
Anacleto, Joaquim; Pereira, Mario G.
2009-01-01
Using macroscopic thermodynamics, the general law for adiabatic processes carried out by an ideal gas was studied. It was shown that the process reversibility is characterized by the adiabatic reversibility coefficient r, in the range 0 ≤ r ≤ 1 for expansions and r ≥ 1 for compressions.…
A proposal for an 'equal peer-review' statement.
Moustafa, Khaled
2015-08-01
To make the peer-review process as objective as possible, I suggest the introduction of an 'equal peer-review' statement that preserves author anonymity across the board, thus removing any potential bias related to nominal or institutional 'prestige'; this would guarantee an equal peer-review process for all authors and grant applicants. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dettmer, Aline; Nunes, Keila Guerra Pacheco; Gutterres, Mariliz; Marcílio, Nilson Romeu
2010-04-15
Leather wastes tanned with chromium are generated during the leather production process; hence the wastes from the handcrafted goods and footwear industries are a serious environmental problem. Thermal treatment of leather wastes is one of the treatment options because the wastes are rich in chromium and can be used as a raw material for sodium chromate production and, further, to obtain several chromium compounds. The objective of this study was to utilize the chromium from leather wastes via basic chromium sulfate production, to be subsequently applied in hide tanning. The results show the first successful attempt to achieve the desired base properties of the product. This was achieved under the following conditions: a molar ratio between sodium sulfite and sodium dichromate equal to 6; a reaction time of 5 min before addition of sulfuric acid; and a pH of the sodium dichromate solution equal to 2. In summary, there is an opportunity to utilize these dangerous wastes and reuse them in the production scheme, minimizing or eliminating the environmental impact and meeting a sustainable process development concept. 2009 Elsevier B.V. All rights reserved.
Method of detecting system function by measuring frequency response
Morrison, John L.; Morrison, William H.
2008-07-01
A real-time battery impedance spectrum is acquired using one time record via Compensated Synchronous Detection (CSD). This parallel method enables battery diagnostics. The excitation current to a test battery is a sum of equal-amplitude sine waves at a few frequencies spread over the range of interest. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, gives the impedance of the battery in the time domain. Since the excitation frequencies are known, synchronous detection processes the time record and each component, both magnitude and phase, is obtained. For compensation, the components, except the one of interest, are reassembled in the time domain. The resulting signal is subtracted from the original signal and the component of interest is synchronously detected. This process is repeated for each component.
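The CSD procedure described above can be sketched numerically. The patent text gives no code; in this toy version (ours) the compensation loop also subtracts each component's own negative-frequency image, and the iteration count is our choice — both matter for a record only a few periods long.

```python
import numpy as np

def csd_spectrum(y, freqs, fs, n_iter=4):
    """Compensated Synchronous Detection, sketched.

    Estimates the complex amplitude Z_k at each excitation frequency
    from one short real-valued time record y.  Each pass re-detects a
    component after subtracting the current estimates of all the other
    components (and the component's own negative-frequency image),
    suppressing the spectral-leakage crosstalk of a short record."""
    t = np.arange(len(y)) / fs
    carriers = [np.exp(2j * np.pi * f * t) for f in freqs]
    # plain synchronous detection as the starting point
    z = np.array([2 * np.mean(y * np.conj(c)) for c in carriers])
    for _ in range(n_iter):
        for k, ck in enumerate(carriers):
            resid = y.astype(complex)
            for m, cm in enumerate(carriers):
                if m != k:
                    resid -= np.real(z[m] * cm)        # other components
            resid -= 0.5 * np.conj(z[k]) * np.conj(ck) # own image
            z[k] = 2 * np.mean(resid * np.conj(ck))
    return z
```

With a 2 s record and tones at 1.3, 4.7, and 11.0 Hz (non-integer cycle counts, so leakage is present), the compensated estimates recover the true complex amplitudes far more accurately than plain synchronous detection.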
Method of Detecting System Function by Measuring Frequency Response
NASA Technical Reports Server (NTRS)
Morrison, John L. (Inventor); Morrison, William H. (Inventor)
2008-01-01
A real-time battery impedance spectrum is acquired using one time record via Compensated Synchronous Detection (CSD). This parallel method enables battery diagnostics. The excitation current to a test battery is a sum of equal-amplitude sine waves at a few frequencies spread over the range of interest. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, gives the impedance of the battery in the time domain. Since the excitation frequencies are known, synchronous detection processes the time record and each component, both magnitude and phase, is obtained. For compensation, the components, except the one of interest, are reassembled in the time domain. The resulting signal is subtracted from the original signal and the component of interest is synchronously detected. This process is repeated for each component.
The Secret List of Dos and Don'ts for Filmmaking
NASA Astrophysics Data System (ADS)
Kramer, N.
2012-12-01
Science is a massive black box to the billions of people who walk the streets. However, the process of filmmaking can be equally mystifying. As with the development of many scientific experiments, the process starts on a napkin at a restaurant…but then what? The road to scientific publication is propelled by a canonical list of dos and don'ts that fit most situations. An equally useful list exists for up-and-coming producers. The list streamlines efforts, optimizes your use of the tools at your fingertips and enhances impact. Many fundamentals can be learned from books, but during this talk we will project and discuss several examples of best practices, from honing a story, to identifying audience appeal, filming, editing and the secrets of inexpensively acquiring expert help. Whether your goal is a two-minute webisode or a 90-minute documentary, these time-tested practices, with a little awareness, can give life to your films.
ERIC Educational Resources Information Center
Cobb, Kitty B., Ed.; Conwell, Catherine R., Ed.
The purpose of the EQUALS programs is to increase the interest and awareness that females and minorities have concerning mathematics and science related careers. This book, produced by an EQUALS program in North Carolina, contains 35 hands-on, discovery science activities that center around four EQUALS processes--problem solving, cooperative…
Where Public Health Meets Human Rights
Kiragu, Karusa; Sawicki, Olga; Smith, Sally; Brion, Sophie; Sharma, Aditi; Mworeko, Lilian; Iovita, Alexandrina
2017-01-01
Abstract In 2014, the World Health Organization (WHO) initiated a process for validation of the elimination of mother-to-child transmission (EMTCT) of HIV and syphilis by countries. For the first time in such a process for the validation of disease elimination, WHO introduced norms and approaches that are grounded in human rights, gender equality, and community engagement. This human rights-based validation process can serve as a key opportunity to enhance accountability for human rights protection by evaluating EMTCT programs against human rights norms and standards, including in relation to gender equality and by ensuring the provision of discrimination-free quality services. The rights-based validation process also involves the assessment of participation of affected communities in EMTCT program development, implementation, and monitoring and evaluation. It brings awareness to the types of human rights abuses and inequalities faced by women living with, at risk of, or affected by HIV and syphilis, and commits governments to eliminate those barriers. This process demonstrates the importance and feasibility of integrating human rights, gender, and community into key public health interventions in a manner that improves health outcomes, legitimizes the participation of affected communities, and advances the human rights of women living with HIV. PMID:29302179
40 CFR 63.2855 - How do I determine the quantity of oilseed processed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... oilseed measurements must be determined on an as received basis, as defined in § 63.2872. The as received... accounting month rather than a calendar month basis, and you have 12 complete accounting months of approximately equal duration in a calendar year, you may substitute the accounting month time interval for the...
Using plant functional traits to restore Hawaiian rainforest
Rebecca Ostertag; Laura Warman; Susan Cordell; Peter M. Vitousek
2015-01-01
Ecosystem restoration efforts are carried out by a variety of individuals and organizations with an equally varied set of goals, priorities, resources and time-scales. Once restoration of a degraded landscape or community is recognized as necessary, choosing which species to include in a restoration programme can be a difficult and value-laden process (Fry, Power &...
1975-10-01
DC anodizing all adhesion values were lower but almost equal. TABLE X: SUMMARY OF EFFECT OF CURRENT DENSITY, TIME AND SEALING OF ...Continuum Interpretation for Fracture and Adhesion", J. Appl. Polymer Science, 13, 29 (1969) 3. Williams, M. L., "Stress Singularities, Adhesion, and
Bjorgan, Asgeir; Randeberg, Lise Lyngsnes
2015-01-01
Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally-updated statistics enables the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With a satisfying denoising performance and real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
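A rough sketch of the idea, assuming the statistics are accumulated as running second moments and the MNF transform is recomputed for each incoming line (the authors' exact update rules live in their published source code, which we do not reproduce):

```python
import numpy as np

class IncrementalMNF:
    """Line-by-line MNF-style denoising with incrementally updated
    statistics (a sketch of the approach; not the authors' code).

    Noise is estimated from differences of horizontally adjacent
    pixels in each incoming line; image and noise covariances are
    accumulated across lines, so the transform improves as more of
    the scan arrives."""

    def __init__(self, n_bands, n_keep):
        self.n_keep = n_keep
        self.sum = np.zeros(n_bands)
        self.cov = np.zeros((n_bands, n_bands))
        self.ncov = np.zeros((n_bands, n_bands))
        self.count = 0
        self.ncount = 0

    def denoise_line(self, line):
        """line: (n_pixels, n_bands) array; returns a denoised copy."""
        # accumulate raw second moments of the data ...
        self.sum += line.sum(axis=0)
        self.cov += line.T @ line
        self.count += len(line)
        # ... and of the horizontal-difference noise estimate
        d = np.diff(line, axis=0) / np.sqrt(2)
        self.ncov += d.T @ d
        self.ncount += len(d)
        mean = self.sum / self.count
        cov = self.cov / self.count - np.outer(mean, mean)
        ncov = self.ncov / self.ncount
        # MNF via noise whitening: eigenvectors of ncov^-1 cov, by SNR
        li = np.linalg.inv(np.linalg.cholesky(ncov))
        _, u = np.linalg.eigh(li @ cov @ li.T)   # ascending eigenvalue order
        v = li.T @ u[:, ::-1]                    # columns by descending SNR
        scores = (line - mean) @ v
        scores[:, self.n_keep:] = 0.0            # drop low-SNR components
        return scores @ np.linalg.inv(v) + mean
```

The horizontal-difference noise model and the fixed `n_keep` truncation are simplifying assumptions; production code would pick the cutoff from the estimated SNR eigenvalues.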
Glass transition dynamics of stacked thin polymer films
NASA Astrophysics Data System (ADS)
Fukao, Koji; Terasawa, Takehide; Oda, Yuto; Nakamura, Kenji; Tahara, Daisuke
2011-10-01
The glass transition dynamics of stacked thin films of polystyrene and poly(2-chlorostyrene) were investigated using differential scanning calorimetry and dielectric relaxation spectroscopy. The glass transition temperature Tg of as-stacked thin polystyrene films has a strong depression from that of the bulk samples. However, after annealing at high temperatures above Tg, the stacked thin films exhibit glass transition at a temperature almost equal to the Tg of the bulk system. The α-process dynamics of stacked thin films of poly(2-chlorostyrene) show a time evolution from single-thin-film-like dynamics to bulk-like dynamics during the isothermal annealing process. The relaxation rate of the α process becomes smaller with increase in the annealing time. The time scale for the evolution of the α dynamics during the annealing process is very long compared with that for the reptation dynamics. At the same time, the temperature dependence of the relaxation time for the α process changes from Arrhenius-like to Vogel-Fulcher-Tammann dependence with increase of the annealing time. The fragility index increases and the distribution of the α-relaxation times becomes smaller with increase in the annealing time for isothermal annealing. The observed change in the α process is discussed with respect to the interfacial interaction between the thin layers of stacked thin polymer films.
78 FR 26031 - Sunshine Act Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-03
... EQUAL EMPLOYMENT OPPORTUNITY COMMISSION Sunshine Act Notice AGENCY HOLDING THE MEETING: Equal Employment Opportunity Commission DATE AND TIME: Wednesday, May 8, 2013, 9:00 a.m. Eastern Time. PLACE.... Announcement of Notation Votes, and 2. Wellness Programs Under Federal Equal Employment Opportunity Laws. Note...
Hybrid time-frequency domain equalization for LED nonlinearity mitigation in OFDM-based VLC systems.
Li, Jianfeng; Huang, Zhitong; Liu, Xiaoshuang; Ji, Yuefeng
2015-01-12
A novel hybrid time-frequency domain equalization scheme is proposed and experimentally demonstrated to mitigate the white light emitting diode (LED) nonlinearity in visible light communication (VLC) systems based on orthogonal frequency division multiplexing (OFDM). We handle the linear and nonlinear distortion separately in a nonlinear OFDM system. The linear part is equalized in frequency domain and the nonlinear part is compensated by an adaptive nonlinear time domain equalizer (N-TDE). The experimental results show that with only a small number of parameters the nonlinear equalizer can efficiently mitigate the LED nonlinearity. With the N-TDE the modulation index (MI) and BER performance can be significantly enhanced.
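The paper does not publish the structure of its N-TDE, so the sketch below substitutes a common choice: an LMS-trained memory polynomial acting on the time-domain samples, with a toy tanh compression standing in for the LED nonlinearity. All parameters (tap count, step size, polynomial order) are our assumptions.

```python
import numpy as np

def n_tde_train(rx, ref, taps=5, mu=0.02, order=3):
    """LMS-trained memory-polynomial equalizer (a stand-in for the
    paper's adaptive nonlinear time-domain equalizer).

    Features are powers 1..order of a sliding window of received
    samples; weights adapt toward the known reference symbols."""
    n = len(rx)
    w = np.zeros(taps * order)
    out = np.zeros(n)
    half = taps // 2
    pad = np.concatenate([np.zeros(half), rx, np.zeros(half)])
    for i in range(n):
        window = pad[i:i + taps]
        feat = np.concatenate([window ** p for p in range(1, order + 1)])
        out[i] = w @ feat
        w += mu * (ref[i] - out[i]) * feat       # LMS weight update
    return w, out
```

On a toy PAM-4 signal passed through a short ISI filter and a tanh compressive nonlinearity, the trained equalizer output tracks the symbols far more closely than the raw received samples.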
Optoelectronic Reservoir Computing
Paquot, Y.; Duport, F.; Smerieri, A.; Dambre, J.; Schrauwen, B.; Haelterman, M.; Massar, S.
2012-01-01
Reservoir computing is a recently introduced, highly efficient bio-inspired approach for processing time-dependent data. The basic scheme of reservoir computing consists of a nonlinear recurrent dynamical system coupled to a single input layer and a single output layer. Within these constraints many implementations are possible. Here we report an optoelectronic implementation of reservoir computing based on a recently proposed architecture consisting of a single nonlinear node and a delay line. Our implementation is sufficiently fast for real-time information processing. We illustrate its performance on tasks of practical importance such as nonlinear channel equalization and speech recognition, and obtain results comparable to state-of-the-art digital implementations. PMID:22371825
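A software analogue of the scheme: a minimal echo state network (fixed random recurrent reservoir plus a trained linear readout) applied to a toy nonlinear channel equalization task. All parameters below are illustrative choices of ours, not those of the optoelectronic setup.

```python
import numpy as np

def esn_equalize(u, d, n_res=100, rho=0.9, warm=100, seed=0):
    """Minimal echo state network: fixed random reservoir, ridge-trained
    linear readout mapping reservoir states to the clean symbols d."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, n_res)
    w = rng.standard_normal((n_res, n_res))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))   # set spectral radius
    x = np.zeros(n_res)
    states = np.zeros((len(u), n_res))
    for i, u_t in enumerate(u):                       # drive the reservoir
        x = np.tanh(w @ x + w_in * u_t)
        states[i] = x
    a = states[warm:]                                 # discard warm-up states
    w_out = np.linalg.solve(a.T @ a + 1e-6 * np.eye(n_res), a.T @ d[warm:])
    return states @ w_out
```

For brevity the readout is trained and evaluated on the same record; a proper benchmark would hold out a test sequence, as the reservoir-computing literature does.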
Nakamura, Moriya; Kamio, Yukiyoshi; Miyazaki, Tetsuya
2010-01-01
We experimentally demonstrate linewidth-tolerant real-time 40-Gbit/s (10-Gsymbol/s) 16-quadrature amplitude modulation. We achieved bit-error rates of <10⁻⁹ using an external-cavity laser diode with a linewidth of 200 kHz and <10⁻⁷ using a distributed-feedback laser diode with a linewidth of 30 MHz, thanks to the phase-noise-canceling capability provided by self-homodyne detection using a pilot carrier. Pre-equalization based on digital signal processing was employed to suppress intersymbol interference caused by the limited frequency bandwidth of the electrical components.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
Rich structure in the correlation matrix spectra in non-equilibrium steady states
NASA Astrophysics Data System (ADS)
Biswas, Soham; Leyvraz, Francois; Monroy Castillero, Paulino; Seligman, Thomas H.
2017-01-01
It has been shown that, if a model displays long-range (power-law) spatial correlations, its equal-time correlation matrix will also have a power-law tail in the distribution of its high-lying eigenvalues. The purpose of this paper is to show that the converse is generally incorrect: a power-law tail in the high-lying eigenvalues of the correlation matrix may exist even in the absence of equal-time power-law correlations in the initial model. We may therefore view the study of the eigenvalue distribution of the correlation matrix as a more powerful tool than the study of spatial correlations, one which may in fact uncover structure that would otherwise not be apparent. Specifically, we show that in the Totally Asymmetric Simple Exclusion Process, whereas there are no clearly visible correlations in the steady state, the eigenvalues of its correlation matrix exhibit a rich structure which we describe in detail.
Islam, Nazmul; Ghosh, Dulal C
2012-01-01
Electrophilicity is an intrinsic property of atoms and molecules. It probably originates in the physical process of electrostatics of the soaked charge in electronic shells and the screened nuclear charge of atoms. Motivated by the existing view of conceptual density functional theory that, similar to electronegativity and hardness equalization, there should be a physical process of equalization of electrophilicity during the chemical process of formation of heteronuclear molecules, we have developed a new theoretical scheme and formula for evaluating the electrophilicity of heteronuclear molecules. A comparative study with available benchmarks reveals that the hypothesis of electrophilicity equalization, and the present method of evaluating equalized electrophilicity, are scientifically promising.
12 CFR 268.103 - Complaints of discrimination covered by this part.
Code of Federal Regulations, 2010 CFR
2010-01-01
... disability), or the Equal Pay Act (sex-based wage discrimination) shall be processed in accordance with this... for employment. (c) This part does not apply to Equal Pay Act complaints of employees whose services... OF THE FEDERAL RESERVE SYSTEM RULES REGARDING EQUAL OPPORTUNITY Board Program To Promote Equal...
Desirable limits of accelerative forces in a space-based materials processing facility
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1990-01-01
There are three categories of accelerations encountered on orbiting spacecraft: (1) quasi-steady accelerations, caused by atmospheric drag or by gravity gradients, of order 10⁻⁶ to 10⁻⁷ g₀; (2) transient accelerations, caused by movements of the astronauts, mass translocations, landing and departure of other spacecraft, etc.; and (3) oscillatory accelerations, caused by running machinery (fans, pumps, generators). Steady accelerations cause continuing displacements; transients cause time-limited displacements. The important aspect is the area under the acceleration curve, measured over a certain time interval. Note that this quantity is not equivalent to a velocity because of friction effects. Transient motions are probably less important than steady accelerations because they only produce constant displacements. If the accelerative forces were not equal and opposite, the displacement would increase with time. A steady acceleration will produce an increasing velocity of a particle, but eventually an equilibrium value will be reached where drag and acceleration forces are equal. From then on, the velocity will remain constant, and the displacement will increase linearly with time.
A time series model: First-order integer-valued autoregressive (INAR(1))
NASA Astrophysics Data System (ADS)
Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.
2017-07-01
Nonnegative integer-valued time series arise in many applications. The first-order Integer-valued AutoRegressive model (INAR(1)) is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the value of the process one period before. The parameters of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses the median or Bayesian forecasting methodology. The median forecasting methodology finds the least integer s at which the cumulative distribution function (CDF) reaches at least 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s at which the CDF reaches at least u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 until April 2016.
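The binomial-thinning recursion and the CLS estimator described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Poisson innovation law, parameter values, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inar1(alpha, lam, n, x0=0):
    """Simulate INAR(1): X_t = alpha o X_{t-1} + eps_t, where 'o' is the
    binomial thinning operator and eps_t ~ Poisson(lam) (assumed here)."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)   # binomial thinning of X_{t-1}
        x[t] = survivors + rng.poisson(lam)         # add the innovation term
    return x

def cls_estimate(x):
    """Conditional Least Squares: since E[X_t | X_{t-1}] = alpha*X_{t-1} + lam,
    CLS reduces to ordinary least squares of X_t on X_{t-1}."""
    y, z = x[1:], x[:-1]
    alpha_hat = np.cov(z, y, bias=True)[0, 1] / np.var(z)
    lam_hat = y.mean() - alpha_hat * z.mean()
    return alpha_hat, lam_hat

x = simulate_inar1(alpha=0.5, lam=2.0, n=20000, x0=4)
alpha_hat, lam_hat = cls_estimate(x)
```

With a long enough series, the CLS estimates recover the thinning probability and the innovation mean used in the simulation.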
Rau, Anne K; Moll, Kristina; Snowling, Margaret J; Landerl, Karin
2015-02-01
The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading, possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. Copyright © 2014 Elsevier Inc. All rights reserved.
Hassanein, Naziha M.; Abd El-Hay Ibrahim, Hussein; Abd El-Baky, Doaa H.
2017-01-01
The ability of dead cells of endophytic Drechslera hawaiiensis of Morus alba L. grown in heavy-metal habitats to bioremove cadmium (Cd2+), copper (Cu2+), and lead (Pb2+) from aqueous solution was evaluated under different conditions. The highest extent of Cd2+ and Cu2+ removal and uptake occurred at pH 8, while that of Pb2+ occurred at near-neutral pH (6-7), after an equilibrium time of 10 min. An initial concentration of 30 mg/L of Cd2+ for a 10 min contact time, and 50 to 90 mg/L of Pb2+ and Cu2+ after an optimal contact time of 30 min, supported the highest biosorption, achieved with a biomass dose equal to 5 mg of dried dead biomass of D. hawaiiensis. The maximum removal of Cd2+, Cu2+, and Pb2+, equal to 100%, 100%, and 99.6% with uptake capacities estimated to be 0.28, 2.33, and 9.63 mg/g from real industrial wastewater, respectively, was achieved within 3 hr contact time at pH 7.0, 7.0, and 6.0, respectively, using the dead biomass of D. hawaiiensis, compared to 94.7%, 98%, and 99.26% removal with uptakes equal to 0.264, 2.3, and 9.58 mg/g of Cd2+, Cu2+, and Pb2+, respectively, with the living cells of the strain under the same conditions. The biosorbent was analyzed by Fourier Transform Infrared Spectroscopy (FT-IR) to identify the various functional groups contributing to the sorption process. From the FT-IR spectra, hydroxyl and amide groups were the major functional groups contributing to the biosorption process. It was concluded that endophytic D. hawaiiensis biomass can potentially be used as a biosorbent for removing Cd2+, Cu2+, and Pb2+ from aqueous solutions. PMID:28781539
A complex valued radial basis function network for equalization of fast time varying channels.
Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R
1999-01-01
This paper presents a complex-valued radial basis function (RBF) network for equalization of fast time-varying channels. A new method for calculating the centers of the RBF network is given. The method allows the number of RBF centers to be fixed even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time-varying channels using a Rayleigh fading channel model to compare the performance of our RBF equalizer with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.
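As a rough illustration of the RBF-equalizer idea, the sketch below trains a Gaussian RBF network with a fixed, small set of centers to equalize a toy static FIR channel with BPSK symbols. This is not the paper's setup (which is complex-valued with a Rayleigh-fading channel and a specific center-selection method); channel, center count, and kernel width are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy channel: y[t] = s[t] + 0.5*s[t-1] + noise, BPSK symbols s[t] in {-1, +1}
n = 4000
s = rng.choice([-1.0, 1.0], size=n)
y = s.copy()
y[1:] += 0.5 * s[:-1]
y += 0.1 * rng.standard_normal(n)

# Equalizer input vectors [y[t], y[t-1]] with target symbol s[t]
X = np.column_stack([y[1:], y[:-1]])
d = s[1:]

# Fixed, small set of RBF centers: a random subset of the training vectors
centers = X[rng.choice(len(X), size=32, replace=False)]
sigma = 0.5

def rbf_features(X, centers, sigma):
    """Gaussian kernel activation of every input vector at every center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

Phi = rbf_features(X, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)   # linear output-layer weights
s_hat = np.sign(Phi @ w)                      # hard symbol decisions
ber = np.mean(s_hat != d)
```

The output layer is linear in the kernel activations, so training reduces to a least-squares solve once the centers are fixed; this is what keeps the center count (and hence the complexity) independent of the equalizer order.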
Yoganandan, Narayan; Arun, Mike W J; Humm, John; Pintar, Frank A
2014-10-01
The first objective of the study was to determine the thorax and abdomen deflection time corridors using the equal stress equal velocity approach from oblique side impact sled tests with postmortem human surrogates fitted with chestbands. The second purpose of the study was to generate deflection time corridors using impulse momentum methods and determine which of these methods best suits the data. An anthropometry-specific load wall was used. Individual surrogate responses were normalized to standard midsize male anthropometry. Corridors from the equal stress equal velocity approach were very similar to those from impulse momentum methods, thus either method can be used for this data. Present mean and plus/minus one standard deviation abdomen and thorax deflection time corridors can be used to evaluate dummies and validate complex human body finite element models.
Ocean Variability Effects on Underwater Acoustic Communications
2011-09-01
schemes for accessing wide frequency bands. Compared with OFDM schemes, the multiband MIMO transmission combined with time reversal processing...systems, or multiple-input/multiple-output (MIMO) systems, decision feedback equalization and interference cancellation schemes have been integrated...MIMO receiver also iterates channel estimation and symbol demodulation with
A motion deblurring method with long/short exposure image pairs
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Hua, Weiping; Zhao, Jufeng; Gong, Xiaoli; Zhu, Liyao
2018-01-01
In this paper, a motion deblurring method with long/short exposure image pairs is presented. The long/short exposure image pairs are captured for the same scene under different exposure times. The image pairs are treated as the input of the deblurring method, so more information can be used to obtain a deblurring result with high image quality. First, luminance equalization is applied to the short exposure image, and the blur kernel is estimated from the image pair under the maximum a posteriori (MAP) framework using a conjugate gradient algorithm. Then an L0 image-smoothing-based denoising method is applied to the luminance-equalized image, and the final deblurring result is obtained with a gain-controlled residual image deconvolution process using the edge map as the gain map. Furthermore, a real experimental optical system is built to capture the image pairs in order to demonstrate the effectiveness of the proposed deblurring framework. The long/short image pairs are obtained under different exposure times and camera gain control. Experimental results show that the proposed method provides a superior deblurring result in both subjective and objective assessment compared with other deblurring approaches.
The spatial unmasking of speech: evidence for within-channel processing of interaural time delay.
Edmonds, Barrie A; Culling, John F
2005-05-01
Across-frequency processing by common interaural time delay (ITD) in spatial unmasking was investigated by measuring speech reception thresholds (SRTs) for high- and low-frequency bands of target speech presented against concurrent speech or a noise masker. Experiment 1 indicated that presenting one of these target bands with an ITD of +500 µs and the other with zero ITD (like the masker) provided some release from masking, but full binaural advantage was only measured when both target bands were given an ITD of +500 µs. Experiment 2 showed that full binaural advantage could also be achieved when the high- and low-frequency bands were presented with ITDs of equal but opposite magnitude (±500 µs). In experiment 3, the masker was also split into high- and low-frequency bands with ITDs of equal but opposite magnitude (±500 µs). The ITD of the low-frequency target band matched that of the high-frequency masking band and vice versa. SRTs indicated that, as long as the target and masker differed in ITD within each frequency band, full binaural advantage could be achieved. These results suggest that the mechanism underlying spatial unmasking exploits differences in ITD independently within each frequency channel.
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and of consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
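The two simplest of the measures above, REC and DET, can be estimated empirically as in the sketch below for a first-order autoregressive process with embedding dimension 1. The parameter values (phi, eps, series length) and the neighbor-based DET computation are illustrative assumptions, not the paper's exact estimators.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stationary AR(1) process x[t] = phi*x[t-1] + e[t]
phi, n, eps = 0.6, 1500, 0.5
x = np.empty(n)
x[0] = rng.standard_normal() / np.sqrt(1 - phi**2)   # draw from the stationary law
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Recurrence matrix (embedding dimension 1): R[i, j] = 1 iff |x_i - x_j| <= eps
R = np.abs(x[:, None] - x[None, :]) <= eps
np.fill_diagonal(R, False)                            # exclude the trivial main diagonal

rec = R.sum() / (n * (n - 1))                         # recurrence rate (REC)

# A recurrence point contributes to DET if it lies on a diagonal line of
# length >= 2, i.e. it has a recurrent diagonal neighbor.
prev = np.zeros_like(R); prev[1:, 1:] = R[:-1, :-1]
nxt = np.zeros_like(R); nxt[:-1, :-1] = R[1:, 1:]
det = (R & (prev | nxt)).sum() / R.sum()              # percent determinism (DET)
```

For a stationary Gaussian process the theoretical REC at large lags is P(|X_i - X_j| ≤ eps) with X_i - X_j Gaussian of twice the stationary variance, which is the kind of closed form the paper derives.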
Functional correlation approach to operational risk in banking organizations
NASA Astrophysics Data System (ADS)
Kühn, Reimer; Neu, Peter
2003-05-01
A Value-at-Risk-based model is proposed to compute the adequate equity capital necessary to cover potential losses due to operational risks, such as human and system process failures, in banking organizations. Exploring the analogy to a lattice gas model from physics, correlations between sequential failures are modeled as functionally defined, heterogeneous couplings between mutually supportive processes. In contrast to traditional risk models for market and credit risk, where correlations are described as equal-time correlations by a covariance matrix, the dynamics of the model shows collective phenomena such as bursts and avalanches of process failures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
Bañez, Lionel L; Terris, Martha K; Aronson, William J; Presti, Joseph C; Kane, Christopher J; Amling, Christopher L; Freedland, Stephen J
2009-04-01
African American men with prostate cancer are at higher risk for cancer-specific death than Caucasian men. We determine whether significant delays in management contribute to this disparity. We hypothesize that in an equal-access health care system, time interval from diagnosis to treatment would not differ by race. We identified 1,532 African American and Caucasian men who underwent radical prostatectomy (RP) from 1988 to 2007 at one of four Veterans Affairs Medical Centers that comprise the Shared Equal-Access Regional Cancer Hospital (SEARCH) database with known biopsy date. We compared time from biopsy to RP between racial groups using linear regression adjusting for demographic and clinical variables. We analyzed risk of potential clinically relevant delays by determining odds of delays >90 and >180 days. Median time interval from diagnosis to RP was 76 and 68 days for African Americans and Caucasian men, respectively (P = 0.004). After controlling for demographic and clinical variables, race was not associated with the time interval between diagnosis and RP (P = 0.09). Furthermore, race was not associated with increased risk of delays >90 (P = 0.45) or >180 days (P = 0.31). In a cohort of men undergoing RP in an equal-access setting, there was no significant difference between racial groups with regard to time interval from diagnosis to RP. Thus, equal-access includes equal timely access to the operating room. Given our previous finding of poorer outcomes among African Americans, treatment delays do not seem to explain these observations. Our findings need to be confirmed in patients electing other treatment modalities and in other practice settings.
Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang
2014-01-01
Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass-customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of an LSSC with service capacity and time factors considered. This model aims to minimize the average unit-volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, along with the increase of the completion time delay coefficient permitted by customers, the possible inserted order volume first increases and then tends to stabilize. Second, supply chain performance is best when the volume of the inserted order is equal to the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process is, the bigger the possible inserted order's volume will be. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.
Forder, Lewis; He, Xun; Franklin, Anna
2017-01-01
Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing.
2015-01-01
1964 (Title VII) and the Pregnancy Discrimination Act amendment to Title VII, the Equal Pay Act of 1963, the Age Discrimination in Employment Act of...EEO programs utilize training on the EEO complaint process and framing of claims and that they use more-structured investigation requests
Code of Federal Regulations, 2011 CFR
2011-07-01
... hatches, sampling ports, and gauge wells provided that each opening is closed when not in use. Examples of... uncontrolled organic HAP emissions from the sum of all process vents are greater than or equal to 0.15 Mg/yr... vents are greater than or equal to 6.8 Mg/yr. Group 2 process vent means any process vent that does not...
Code of Federal Regulations, 2010 CFR
2010-07-01
... hatches, sampling ports, and gauge wells provided that each opening is closed when not in use. Examples of... uncontrolled organic HAP emissions from the sum of all process vents are greater than or equal to 0.15 Mg/yr... vents are greater than or equal to 6.8 Mg/yr. Group 2 process vent means any process vent that does not...
Process for forming a chromium diffusion portion and articles made therefrom
Helmick, David Andrew; Cavanaugh, Dennis William; Feng, Ganjiang; Bucci, David Vincent
2012-09-11
In one embodiment, a method for forming an article with a diffusion portion comprises: forming a slurry comprising chromium and silicon, applying the slurry to the article, and heating the article to a sufficient temperature and for a sufficient period of time to diffuse chromium and silicon into the article and form a diffusion portion comprising silicon and a microstructure comprising α-chromium. In one embodiment, a gas turbine component comprises: a superalloy and a diffusion portion having a depth of less than or equal to 60 µm measured from the superalloy surface into the gas turbine component. The diffusion portion has a diffusion surface having a microstructure comprising greater than or equal to 40% by volume α-chromium.
Kismödi, Eszter; Kiragu, Karusa; Sawicki, Olga; Smith, Sally; Brion, Sophie; Sharma, Aditi; Mworeko, Lilian; Iovita, Alexandrina
2017-12-01
In 2014, the World Health Organization (WHO) initiated a process for validation of the elimination of mother-to-child transmission (EMTCT) of HIV and syphilis by countries. For the first time in such a process for the validation of disease elimination, WHO introduced norms and approaches that are grounded in human rights, gender equality, and community engagement. This human rights-based validation process can serve as a key opportunity to enhance accountability for human rights protection by evaluating EMTCT programs against human rights norms and standards, including in relation to gender equality and by ensuring the provision of discrimination-free quality services. The rights-based validation process also involves the assessment of participation of affected communities in EMTCT program development, implementation, and monitoring and evaluation. It brings awareness to the types of human rights abuses and inequalities faced by women living with, at risk of, or affected by HIV and syphilis, and commits governments to eliminate those barriers. This process demonstrates the importance and feasibility of integrating human rights, gender, and community into key public health interventions in a manner that improves health outcomes, legitimizes the participation of affected communities, and advances the human rights of women living with HIV.
Quality control process improvement of flexible printed circuit board by FMEA
NASA Astrophysics Data System (ADS)
Krasaephol, Siwaporn; Chutima, Parames
2018-02-01
This research focuses on quality control process improvement of a Flexible Printed Circuit Board (FPCB), centred on model 7-Flex, using the Failure Mode and Effect Analysis (FMEA) method to decrease the proportion of defective finished goods found at the final inspection process. Because a number of defective units were found only at the final inspection process, many defects may escape to customers. The problem comes from a poor quality control process, which is not efficient enough to filter defective products in-process because there is no In-Process Quality Control (IPQC) or sampling inspection in the process. Therefore, the quality control process has to be improved by setting inspection gates and IPQCs at critical processes in order to filter out defective products. The critical processes are analysed by the FMEA method. IPQC is used for detecting defective products and reducing the chance of defective finished goods escaping to customers. Reducing the proportion of defective finished goods also decreases scrap cost, because finished goods incur higher scrap cost than work in-process. Moreover, defective products found during the process can reveal abnormal processes; therefore, engineers and operators can solve the problems promptly. The improved quality control was implemented on the 7-Flex production lines from July 2017 to September 2017. The results show decreases in the average proportion of defective finished goods and in the average Customer Manufacturers Lot Reject Rate (%LRR of CMs) equal to 4.5% and 4.1%, respectively. Furthermore, the cost saving from this quality control process equals 100K Baht.
X-ray beam equalization for digital fluoroscopy
NASA Astrophysics Data System (ADS)
Molloi, Sabee Y.; Tang, Jerry; Marcin, Martin R.; Zhou, Yifang; Anvar, Behzad
1996-04-01
The concept of radiographic equalization has previously been investigated. However, a suitable technique for digital fluoroscopic applications has not been developed. The previously reported scanning equalization techniques cannot be applied to fluoroscopic applications due to their exposure time limitations. On the other hand, area beam equalization techniques are more suited to digital fluoroscopic applications. The purpose of this study is to develop an x-ray beam equalization technique for digital fluoroscopic applications that will produce an equalized radiograph with minimal image artifacts and tube loading. Preliminary unequalized images of a humanoid chest phantom were acquired using a digital fluoroscopic system. Using this preliminary image as a guide, an 8 by 8 array of square pistons was used to generate masks in a mold with CeO2. The CeO2 attenuator thicknesses were calculated using the gray-level information from the unequalized image. The generated mask was positioned close to the focal spot (magnification of 8.0) in order to minimize edge artifacts from the mask. The masks were generated manually in order to investigate the piston and matrix size requirements. The development of an automated version of mask generation and positioning is in progress. The results of manual mask generation and positioning show that it is possible to generate equalized radiographs with minimal perceptible artifacts. The equalization of x-ray transmission across the field exiting from the object significantly improved the image quality by preserving local contrast throughout the image. Furthermore, the reduction in dynamic range significantly reduced the effect of x-ray scatter and veiling glare from high-transmission to low-transmission areas. Also, the x-ray tube loading due to the mask assembly itself was negligible. In conclusion, it is possible to produce area beam compensation that is compatible with digital fluoroscopy with minimal compensation artifacts.
The compensation process produces an image with an equalized signal-to-noise ratio in all parts of the image.
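The attenuator-thickness calculation described above can be sketched as follows. This is a minimal illustration only: it assumes Beer-Lambert attenuation and gray levels proportional to exposure, and the attenuation coefficient `MU_CEO2` and target gray level are hypothetical values, not figures from the study.

```python
import numpy as np

# Hypothetical linear attenuation coefficient for the CeO2 compensator (1/cm);
# real values depend on beam energy and material density.
MU_CEO2 = 2.0

def mask_thickness(gray, gray_target, mu=MU_CEO2):
    """Attenuator thickness that brings bright regions down to the target
    level, assuming gray level is proportional to exposure and
    Beer-Lambert attenuation I = I0 * exp(-mu * t)."""
    gray = np.asarray(gray, dtype=float)
    # No attenuator where the region is already at or below the target.
    ratio = np.maximum(gray / gray_target, 1.0)
    return np.log(ratio) / mu

# 8x8 unequalized gray-level map (arbitrary units), one value per piston cell.
rng = np.random.default_rng(0)
unequalized = rng.uniform(50.0, 400.0, size=(8, 8))
thickness = mask_thickness(unequalized, gray_target=50.0)
```

Brighter (higher transmission) cells receive thicker attenuator, which is what compresses the dynamic range at the detector.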
A cost-effective line-based light-balancing technique using adaptive processing.
Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min
2006-09-01
Camera imaging systems are widely used; however, the displayed image often exhibits an unequal light distribution. This paper presents novel light-balancing techniques to compensate for uneven illumination based on adaptive signal processing. For text image processing, we first estimate the background level and then process each pixel with a nonuniform gain. This algorithm can balance the light distribution while keeping high contrast in the image. For graph image processing, adaptive section control using piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and the computational cost, making the approach applicable in real-time systems.
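The background-estimation-plus-nonuniform-gain idea for text images can be sketched line by line. This is an illustrative interpretation, not the paper's algorithm: the moving-average background estimator, window size, and target level are all assumptions.

```python
import numpy as np

def balance_light(image, target=200.0, window=31):
    """Line-based light-balancing sketch: estimate the slowly varying
    background of each line with a moving average, then apply a
    per-pixel gain that lifts the background to a uniform target level."""
    img = np.asarray(image, dtype=float)
    kernel = np.ones(window) / window
    out = np.empty_like(img)
    for i, row in enumerate(img):
        # Reflect-pad so the moving average is defined at the borders.
        padded = np.pad(row, window // 2, mode="reflect")
        background = np.convolve(padded, kernel, mode="valid")
        gain = target / np.maximum(background, 1.0)  # nonuniform gain
        out[i] = np.clip(row * gain, 0, 255)
    return out
```

Because each line is processed independently with a short window, only one line (plus padding) needs to be buffered at a time, which is the memory advantage of line-based processing.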
Sung, Jaeyoung
2007-07-01
We present an exact theoretical test of Jarzynski's equality (JE) for reversible volume-switching processes of an ideal gas system. The exact analysis shows that the prediction of JE for the free energy difference is the same as the work done on the gas system during the reversible process, which depends on the shape of the path of the reversible volume-switching process.
Community understandings of and responses to gender equality and empowerment in Rakai, Uganda.
Mullinax, Margo; Higgins, Jenny; Wagman, Jennifer; Nakyanjo, Neema; Kigozi, Godfrey; Serwadda, David; Wawer, Maria; Gray, Ronald; Nalugoda, Fred
2013-01-01
Women's rights and gender empowerment programmes are now part of the international agenda for improving global public health, the benefits of which are well documented. However, the public health community has yet to address how people define and understand gender equality and how they enact the process of empowerment in their lives. This study uses safe homes and respect for everyone (SHARE), an anti-violence intervention in rural Rakai, Uganda, as a case study to investigate perceptions of gender equality. Investigators analysed 12 focus groups of adult women and men to explore how macro-level concepts of gender equality are being processed on an interpersonal level and the effects on health outcomes. Respondents generally agreed that women lack basic rights. However, they also expressed widespread disagreement about the meanings of gender equality, and reported difficulties integrating the concepts of gender equality into their interpersonal relationships. Community members reported that equality, with the resulting shift in gender norms, could expose women to adverse consequences such as violence, infidelity and abandonment with increased sexual health risks, and potential adverse effects on education. Efforts to increase women's rights must occur in conjunction with community-based work on understandings of gender equality.
NASA Astrophysics Data System (ADS)
Gu, Cunchang; Mu, Yundong
2013-03-01
In this paper, we consider a single-machine on-line scheduling problem with special chains precedence and delivery times. All jobs arrive over time. Chain chain_i arrives at time r_i, and it is known in advance that the processing and delivery times of the jobs on each chain satisfy a special condition: if job J(i)_j is the predecessor of job J(i)_k on chain chain_i, then p(i)_j = p(i)_k = p >= q_j >= q_k, i = 1, 2, ..., n, where p_j and q_j denote the processing time and the delivery time of job J_j, respectively. Obviously, if an arriving job has no chain precedence, the length of the corresponding chain is 1. The objective is to minimize the time by which all jobs have been delivered. We provide an on-line algorithm with a competitive ratio of √2, and this result is the best possible.
ERIC Educational Resources Information Center
Child, Barbara
1975-01-01
In State v. Koome, the Washington Supreme Court struck down that state's statute regarding parental consent for a minor's abortion. Implications of the finding for a minor's right to due process, equal protection, and privacy are discussed. (LBH)
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Processing. 7.35 Section 7.35 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development EQUAL EMPLOYMENT OPPORTUNITY; POLICY, PROCEDURES AND PROGRAMS Equal Employment Opportunity Without Regard to Race...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Processing. 7.35 Section 7.35 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development EQUAL EMPLOYMENT OPPORTUNITY; POLICY, PROCEDURES AND PROGRAMS Equal Employment Opportunity Without Regard to Race...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Processing. 7.35 Section 7.35 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development EQUAL EMPLOYMENT OPPORTUNITY; POLICY, PROCEDURES AND PROGRAMS Equal Employment Opportunity Without Regard to Race...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Processing. 7.35 Section 7.35 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development EQUAL EMPLOYMENT OPPORTUNITY; POLICY, PROCEDURES AND PROGRAMS Equal Employment Opportunity Without Regard to Race...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Processing. 7.35 Section 7.35 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development EQUAL EMPLOYMENT OPPORTUNITY; POLICY, PROCEDURES AND PROGRAMS Equal Employment Opportunity Without Regard to Race...
ERIC Educational Resources Information Center
Social Policy, 1976
1976-01-01
Priorities on the agenda include fair representation and participation in the political process, equal education and training, meaningful work and adequate compensation, equal access to economic power, adequate housing, physical safety, and fair treatment by and equal access to media and the arts. (Author/AM)
Adaptive Filter Design Using Type-2 Fuzzy Cerebellar Model Articulation Controller.
Lin, Chih-Min; Yang, Ming-Shu; Chao, Fei; Hu, Xiao-Min; Zhang, Jun
2016-10-01
This paper aims to propose an efficient network and applies it as an adaptive filter for signal processing problems. An adaptive filter is proposed using a novel interval type-2 fuzzy cerebellar model articulation controller (T2FCMAC). The T2FCMAC realizes an interval type-2 fuzzy logic system based on the structure of the CMAC. Due to their better ability to handle uncertainties, type-2 fuzzy sets can solve some complicated problems more effectively than type-1 fuzzy sets. In addition, the Lyapunov function is utilized to derive the conditions of the adaptive learning rates, so that the convergence of the filtering error can be guaranteed. In order to demonstrate the performance of the proposed adaptive T2FCMAC filter, it is tested in signal processing applications, including a nonlinear channel equalization system, a time-varying channel equalization system, and an adaptive noise cancellation system. The advantages of the proposed filter over other adaptive filters are verified through simulations.
Radio wavelength observations of magnetic fields on active dwarf M, RS CVn and magnetic stars
NASA Technical Reports Server (NTRS)
Lang, Kenneth R.
1986-01-01
The dwarf M stars, YZ Canis Minoris and AD Leonis, exhibit narrow-band, slowly varying (hours) microwave emission that cannot be explained by conventional thermal radiation mechanisms. The dwarf M stars, AD Leonis and Wolf 424, emit rapid spikes whose high brightness temperatures similarly require a nonthermal radiation process. They are attributed to coherent mechanisms such as an electron-cyclotron maser or coherent plasma radiation. If the electron-cyclotron maser emits at the second or third harmonic of the gyrofrequency, the coronal magnetic field strength equals 250 G or 167 G, and constraints on the plasma frequency imply an electron density of 6 × 10^9 per cubic centimeter. Radio spikes from AD Leonis and Wolf 424 have rise times less than or equal to 5 ms, indicating a linear size of less than or equal to 1.5 × 10^8 cm, or less than 0.005 of the stellar radius. Although Ap magnetic stars have strong dipole magnetic fields, they exhibit no detectable gyroresonant radiation, suggesting that these stars do not have hot, dense coronae. The binary RS CVn star UX Arietis exhibits variable emission at 6 cm wavelength on time scales ranging from 30 s to more than one hour.
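The quoted field strengths follow from the gyrofrequency relation f = s × 2.8 MHz × B[G] for harmonic number s. The short calculation below reproduces the 250 G and 167 G values; the 1.4 GHz observing frequency is assumed here to make the numbers come out as quoted, not stated in the abstract.

```python
# Coronal magnetic field from the electron-cyclotron maser interpretation:
# if the observed frequency f is the s-th harmonic of the electron
# gyrofrequency, f = s * 2.8 MHz * B[G], so B = f / (s * 2.8 MHz).

F_OBS_HZ = 1.4e9       # assumed observing frequency (Hz)
HZ_PER_GAUSS = 2.8e6   # electron gyrofrequency per gauss

def coronal_field_gauss(f_hz, harmonic):
    return f_hz / (harmonic * HZ_PER_GAUSS)

b2 = coronal_field_gauss(F_OBS_HZ, 2)  # second harmonic -> 250 G
b3 = coronal_field_gauss(F_OBS_HZ, 3)  # third harmonic -> ~167 G
```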
Binary gaseous mixture and single component adsorption of methane and argon on exfoliated graphite
NASA Astrophysics Data System (ADS)
Russell, Brice Adam
Exfoliated graphite was used as a substrate for adsorption of argon and methane. Adsorption experiments were conducted both for equal-parts mixtures of argon and methane and for each gas species independently. The purpose was to compare mixture adsorption to single-component adsorption and to investigate theoretical predictions concerning the kinetics of adsorption made by Burde and Calbi. In particular, the time to reach pressure equilibrium of a single dose at a constant temperature for the equal-parts mixture was compared to the time of adsorption for each species by itself. It was shown that mixture adsorption is a much more complex and time-consuming process than single-component adsorption and requires a much longer time to reach equilibrium. Information about the composition evolution of the mixture while the pressure approached equilibrium was obtained using a quadrupole mass spectrometer. Evidence for an initially higher rate of adsorption for the weaker-binding-energy species (argon) was found, as well as an overall composition change which clearly indicated a higher coverage of methane on the graphite sample by the time equilibration was reached. The effective specific surface area of graphite for both argon and methane was also determined using the Point-B method.
A real-time architecture for time-aware agents.
Prouskas, Konstantinos-Vassileios; Pitt, Jeremy V
2004-06-01
This paper describes the specification and implementation of a new three-layer time-aware agent architecture. This architecture is designed for applications and environments where societies of humans and agents play equally active roles, but interact and operate in completely different time frames. The architecture consists of three layers: the April real-time run-time (ART) layer, the time aware layer (TAL), and the application agents layer (AAL). The ART layer forms the underlying real-time agent platform. An original online, real-time, dynamic priority-based scheduling algorithm is described for scheduling the computation time of agent processes, and it is shown that the algorithm's O(n) complexity and scalable performance are sufficient for application in real-time domains. The TAL layer forms an abstraction layer through which human and agent interactions are temporally unified, that is, handled in a common way irrespective of their temporal representation and scale. A novel O(n^2) interaction scheduling algorithm is described for predicting and guaranteeing interactions' initiation and completion times. The time-aware predicting component of a workflow management system is also presented as an instance of the AAL layer. The described time-aware architecture addresses two key challenges in enabling agents to be effectively configured and applied in environments where humans and agents play equally active roles. It provides flexibility and adaptability in its real-time mechanisms while placing them under direct agent control, and it temporally unifies human and agent interactions.
Loeffler, Jonna; Raab, Markus; Cañal-Bruland, Rouwen
2017-09-01
Embodied cognition frameworks suggest a direct link between sensorimotor experience and cognitive representations of concepts (Shapiro, 2011). We examined whether this also holds true for concepts that cannot be directly perceived with the sensorimotor system (i.e., temporal concepts). To test this, participants learned object-space (Exp. 1) or object-time (Exp. 2) associations. Afterwards, participants were asked to assign the objects to their location in space/time while they walked backward, walked forward, or stood on a treadmill. We hypothesized that walking backward should facilitate the online processing of "behind"/"past"-related stimuli but hinder the processing of "ahead"/"future"-related stimuli, with a reversed effect for forward walking. Indeed, "ahead"- and "future"-related stimuli were processed more slowly during backward walking. During forward walking and standing, stimuli were processed equally fast. The results provide partial evidence for the activation of specific spatial and temporal concepts by whole-body movements and are discussed in the context of movement familiarity.
Microstructural evolution of bainitic steel severely deformed by equal channel angular pressing.
Nili-Ahmadabadi, M; Haji Akbari, F; Rad, F; Karimi, Z; Iranpour, M; Poorganji, B; Furuhara, T
2010-09-01
High-Si bainitic steel has received much interest because it combines ultra-high strength and good ductility with high wear resistance. In this study, a high-Si bainitic steel (Fe-0.22C-2.0Si-3.0Mn) with a suitable microstructure that could endure severe plastic deformation was used. In order to study the effect of severe plastic deformation on the microstructure and properties of bainitic steel, Equal Channel Angular Pressing was performed in two passes at room temperature. Optical, SEM and TEM microscopies were used to examine the microstructure of specimens before and after Equal Channel Angular Pressing processing. X-ray diffraction was used to measure retained austenite after austempering and Equal Channel Angular Pressing processing. The retained austenite peaks disappeared after Equal Channel Angular Pressing, which can be attributed to the transformation of austenite to martensite during severe plastic deformation. The increase in hardness with the number of Equal Channel Angular Pressing passes confirms this idea.
NASA Astrophysics Data System (ADS)
Neri, Izaak; Roldán, Édgar; Jülicher, Frank
2017-01-01
We study the statistics of infima, stopping times, and passage probabilities of entropy production in nonequilibrium steady states, and we show that they are universal. We consider two examples of stopping times: first-passage times of entropy production and waiting times of stochastic processes, which are the times when a system reaches a given state for the first time. Our main results are as follows: (i) The distribution of the global infimum of entropy production is exponential with mean equal to minus Boltzmann's constant; (ii) we find exact expressions for the passage probabilities of entropy production; (iii) we derive a fluctuation theorem for stopping-time distributions of entropy production. These results have interesting implications for stochastic processes that can be discussed in simple colloidal systems and in active molecular processes. In particular, we show that the timing and statistics of discrete chemical transitions of molecular processes, such as the steps of molecular motors, are governed by the statistics of entropy production. We also show that the extreme-value statistics of active molecular processes are governed by entropy production; for example, we derive a relation between the maximal excursion of a molecular motor against the direction of an external force and the infimum of the corresponding entropy-production fluctuations. Using this relation, we make predictions for the distribution of the maximum backtrack depth of RNA polymerases, which follow from our universal results for entropy-production infima.
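Result (i) above, that the global infimum of steady-state entropy production is exponentially distributed with mean minus Boltzmann's constant (mean -1 in units of k_B), can be checked numerically in the simplest nonequilibrium model. The sketch below uses drift-diffusion, where the entropy production along a trajectory is S(t) = v X(t)/D in units of k_B; this model choice and all parameter values are illustrative assumptions, not the paper's examples.

```python
import numpy as np

# Drifted Brownian paths: dX = v dt + sqrt(2 D dt) * xi.
# Entropy production along a path (in units of k_B): S(t) = v * X(t) / D.
# Prediction (i): the global infimum of S is exponential with mean -1.
rng = np.random.default_rng(1)
v, D, dt, steps, trials = 1.0, 1.0, 0.01, 4000, 1000

increments = v * dt + np.sqrt(2 * D * dt) * rng.standard_normal((trials, steps))
x = np.cumsum(increments, axis=1)          # trajectories X(t)
s = v * x / D                              # entropy production / k_B
infima = np.minimum(s.min(axis=1), 0.0)    # include S(0) = 0 in the infimum
mean_infimum = infima.mean()               # should be close to -1
```

The finite time horizon and time step bias the estimate slightly toward zero, but the mean lands near -1 as the universal result predicts.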
Cybulski, Olgierd; Babin, Volodymyr; Hołyst, Robert
2004-01-01
We analyze the Fleming-Viot process. The system is confined in a box, whose boundaries act as a sink of Brownian particles. The death rate at the boundaries is matched by the branching (birth) rate in the system and thus the number of particles is kept constant. We show that such a process is described by the Renyi entropy whose production is minimized in the stationary state. The entropy production in this process is a monotonically decreasing function of time irrespective of the initial conditions. The first Laplacian eigenvalue is shown to be equal to the Renyi entropy production in the stationary state. As an example we simulate the process in a two-dimensional box.
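The Fleming-Viot dynamics described above (absorption at the boundary balanced by branching from survivors) can be simulated in a few lines. This is a minimal sketch under stated assumptions, not the authors' code: unit box, particle count `N`, step size `sigma`, and step count are all illustrative.

```python
import numpy as np

# Fleming-Viot sketch: N Brownian particles in the unit 2-D box; a particle
# hitting the absorbing boundary is removed and instantly re-branched from
# the position of a uniformly chosen surviving particle, keeping N constant.
rng = np.random.default_rng(2)
N, steps, sigma = 500, 2000, 0.01

pos = rng.uniform(0.1, 0.9, size=(N, 2))
for _ in range(steps):
    pos += sigma * rng.standard_normal((N, 2))
    dead = np.any((pos <= 0.0) | (pos >= 1.0), axis=1)
    if dead.any():
        alive = np.flatnonzero(~dead)
        # rebirth at the location of a random surviving particle
        pos[dead] = pos[rng.choice(alive, size=dead.sum())]
```

In the stationary state the empirical density approaches the first Laplacian eigenfunction of the box (peaked at the center), consistent with the eigenvalue result quoted in the abstract.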
Practical Unitary Simulator for Non-Markovian Complex Processes
NASA Astrophysics Data System (ADS)
Binder, Felix C.; Thompson, Jayne; Gu, Mile
2018-06-01
Stochastic processes are as ubiquitous throughout the quantitative sciences as they are notorious for being difficult to simulate and predict. In this Letter, we propose a unitary quantum simulator for discrete-time stochastic processes which requires less internal memory than any classical analogue throughout the simulation. The simulator's internal memory requirements equal those of the best previous quantum models. However, in contrast to previous models, it only requires a (small) finite-dimensional Hilbert space. Moreover, since the simulator operates unitarily throughout, it avoids any unnecessary information loss. We provide a stepwise construction for simulators for a large class of stochastic processes hence directly opening the possibility for experimental implementations with current platforms for quantum computation. The results are illustrated for an example process.
Excursion Processes Associated with Elliptic Combinatorics
NASA Astrophysics Data System (ADS)
Baba, Hiroya; Katori, Makoto
2018-06-01
Researching elliptic analogues for equalities and formulas is a new trend in enumerative combinatorics which has followed the previous trend of studying q-analogues. Recently Schlosser proposed a lattice path model in the square lattice with a family of totally elliptic weight-functions including several complex parameters and discussed an elliptic extension of the binomial theorem. In the present paper, we introduce a family of discrete-time excursion processes on Z starting from the origin and returning to the origin in a given time duration 2 T associated with Schlosser's elliptic combinatorics. The processes are inhomogeneous both in space and time and hence expected to provide new models in non-equilibrium statistical mechanics. By numerical calculation we show that the maximum likelihood trajectories on the spatio-temporal plane of the elliptic excursion processes and of their reduced trigonometric versions are not straight lines in general but are nontrivially curved depending on parameters. We analyze asymptotic probability laws in the long-term limit T → ∞ for a simplified trigonometric version of excursion process. Emergence of nontrivial curves of trajectories in a large scale of space and time from the elementary elliptic weight-functions exhibits a new aspect of elliptic combinatorics.
Low Luminosity States of the Black Hole Candidate GX 339-4. 2; Timing Analysis
NASA Technical Reports Server (NTRS)
Nowak, Michael A.; Wilms, Joern; Dove, James B.
1999-01-01
Here we present timing analysis of a set of eight Rossi X-ray Timing Explorer (RXTE) observations of the black hole candidate GX 339-4 that were taken during its hard/low state. On long time scales, the RXTE All Sky Monitor data reveal evidence of a 240 day periodicity, comparable to timescales expected from warped, precessing accretion disks. On short timescales all observations save one show evidence of a persistent quasi-periodic oscillation (QPO) at f_QPO ≈ 0.3 Hz. The broadband (10^-3 to 10^2 Hz) power appears to be dominated by two independent processes that can be modeled as very broad Lorentzians with Q ≲ 1. The coherence function between soft and hard photon variability shows that if these are truly independent processes, then they are individually coherent, but they are incoherent with one another. This is evidenced by the fact that the coherence function between the hard and soft variability is near unity above 5 × 10^-3 Hz but shows evidence of a dip at f ≈ 1 Hz. This is the region of overlap between the broad Lorentzian fits to the Power Spectral Density (PSD). Similar to Cyg X-1, the coherence also drops dramatically at frequencies above approximately 10 Hz. Also similar to Cyg X-1, the hard photon variability is seen to lag the soft photon variability, with the lag time increasing with decreasing Fourier frequency. The magnitude of this time lag appears to be positively correlated with the flux of GX 339-4. We discuss all of these observations in light of current theoretical models of both black hole spectra and temporal variability.
ERIC Educational Resources Information Center
DeCesare, Tony
2016-01-01
One of Amy Gutmann's important achievements in "Democratic Education" is her development of a "democratic interpretation of equal educational opportunity." This standard of equality demands that "all educable children learn enough to participate effectively in the democratic process." In other words, Gutmann demands…
Minimizing the area required for time constants in integrated circuits
NASA Technical Reports Server (NTRS)
Lyons, J. C.
1972-01-01
When a medium- or large-scale integrated circuit is designed, efforts are usually made to avoid the use of resistor-capacitor time constant generators. The capacitor needed for this circuit usually takes up more surface area on the chip than several resistors and transistors. When the use of this network is unavoidable, the designer usually makes an effort to see that the choice of resistor and capacitor combinations is such that a minimum amount of surface area is consumed. The optimum ratio of resistance to capacitance that will result in this minimum area is equal to the ratio of resistance to capacitance which may be obtained from a unit of surface area for the particular process being used. The minimum area required is a function of the square root of the reciprocal of the product of the resistance and capacitance per unit area. This minimum occurs when the area required by the resistor is equal to the area required by the capacitor.
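The area trade-off above can be made concrete with a simplified linear model: if R = r_a·A_R and C = c_a·A_C for the areas A_R, A_C devoted to each component, then minimizing A_R + A_C subject to τ = RC gives, by the AM-GM inequality, equal areas A_R = A_C = sqrt(τ/(r_a·c_a)) and total area 2·sqrt(τ/(r_a·c_a)), proportional to the square root of the reciprocal of r_a·c_a. The numbers below (τ, r_a, c_a) are hypothetical, not process data from the report.

```python
import math

def optimal_areas(tau, r_a, c_a):
    """Minimize A_R + A_C subject to tau = R*C = (r_a*A_R)*(c_a*A_C),
    where r_a and c_a are the resistance and capacitance obtainable per
    unit of surface area. By AM-GM the minimum is at equal areas."""
    a = math.sqrt(tau / (r_a * c_a))
    return a, a  # resistor area equals capacitor area at the optimum

tau = 1e-3             # desired time constant in seconds (hypothetical)
r_a = 1e3              # ohms per unit area (hypothetical)
c_a = 1e-9             # farads per unit area (hypothetical)
a_r, a_c = optimal_areas(tau, r_a, c_a)
total_area = a_r + a_c  # = 2 * sqrt(tau / (r_a * c_a))
```

Note that at this optimum R/C = r_a/c_a, i.e. the ratio of resistance to capacitance equals the ratio obtainable per unit area, matching the report's conclusion.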
Time management for preclinical safety professionals.
Wells, Monique Y
2010-08-01
A survey about time management in the workplace was distributed to obtain a sense of the level of job satisfaction among preclinical safety professionals in the current economic climate, and to encourage reflection upon how we manage time in our work environment. Roughly equal numbers of respondents (approximately 32%) identified themselves as management or staff, and approximately 27% indicated that they are consultants. Though 45.2% of respondents indicated that time management is very challenging for the profession in general, only 36.7% find it very challenging for themselves. Ten percent of respondents view time management to be exceedingly challenging for themselves. Approximately 34% of respondents indicated that prioritization of tasks was the most challenging aspect of time management for them. Focusing on an individual task was the second most challenging aspect (26%), followed equally by procrastination and delegation of tasks (12.4%). Almost equal numbers of respondents said that they would (35.2%) or might (33.3%) undertake training to improve their time management skills. Almost equal numbers of participants responded "perhaps" (44.6%) or "yes" (44.2%) to the question of whether management personnel should be trained in time management.
Israel's Gender Equality Policy in Education: Revolution or Containment?
ERIC Educational Resources Information Center
Eden, Devorah
2000-01-01
Examines Israel's policy of gender equality in education, discussing: social and economic forces that created the demand for equality; political processes for implementing the policy; and policy content. Data from interviews and document reviews indicate that the policy was devised to address concerns of high-tech industries and women,…
Parallel constraint satisfaction in memory-based decisions.
Glöckner, Andreas; Hodges, Sara D
2011-01-01
Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
Process for forming a chromium diffusion portion and articles made therefrom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmick, David Andrew; Cavanaugh, Dennis William; Feng, Ganjiang
In one embodiment, a method for forming an article with a diffusion portion comprises: forming a slurry comprising chromium and silicon, applying the slurry to the article, and heating the article to a sufficient temperature and for a sufficient period of time to diffuse chromium and silicon into the article and form a diffusion portion comprising silicon and a microstructure comprising α-chromium. In one embodiment, a gas turbine component comprises: a superalloy and a diffusion portion having a depth of less than or equal to 60 μm measured from the superalloy surface into the gas turbine component. The diffusion portion has a diffusion surface having a microstructure comprising greater than or equal to 40% by volume α-chromium.
Musical rhythm and reading development: does beat processing matter?
Ozernov-Palchik, Ola; Patel, Aniruddh D
2018-05-20
There is mounting evidence for links between musical rhythm processing and reading-related cognitive skills, such as phonological awareness. This may be because music and speech are rhythmic: both involve processing complex sound sequences with systematic patterns of timing, accent, and grouping. Yet, there is a salient difference between musical and speech rhythm: musical rhythm is often beat-based (based on an underlying grid of equal time intervals), while speech rhythm is not. Thus, the role of beat-based processing in the reading-rhythm relationship is not clear. Is there a distinct relation between beat-based processing mechanisms and reading-related language skills, or is the rhythm-reading link entirely due to shared mechanisms for processing non-beat-based aspects of temporal structure? We discuss recent evidence for a distinct link between beat-based processing and early reading abilities in young children, and suggest experimental designs that would allow one to further methodically investigate this relationship. We propose that beat-based processing taps into a listener's ability to use rich contextual regularities to form predictions, a skill important for reading development. © 2018 New York Academy of Sciences.
Kastelein, Ronald A; Wensveen, Paul J; Terhune, John M; de Jong, Christ A F
2011-01-01
Equal-loudness functions describe relationships between the frequencies of sounds and their perceived loudness. This pilot study investigated the possibility of deriving equal-loudness contours based on the assumption that sounds of equal perceived loudness elicit equal reaction times (RTs). During a psychoacoustic underwater hearing study, the responses of two young female harbor seals to tonal signals between 0.125 and 100 kHz were filmed. Frame-by-frame analysis was used to quantify RT (the time between the onset of the sound stimulus and the onset of movement of the seal away from the listening station). Near-threshold equal-latency contours, as surrogates for equal-loudness contours, were estimated from RT-level functions fitted to mean RT data. The closer the received sound pressure level was to the 50% detection hearing threshold, the more slowly the animals reacted to the signal (RT range: 188-982 ms). Equal-latency contours were calculated relative to the RTs shown by each seal at sound levels of 0, 10, and 20 dB above the detection threshold at 1 kHz. Fifty percent detection thresholds are obtained with well-trained subjects actively listening for faint familiar sounds. When calculating audibility ranges of sounds for harbor seals in nature, it may be appropriate to consider levels 20 dB above this threshold.
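The equal-latency idea above (fit a reaction-time-versus-level function per frequency, then invert it to find the level producing a chosen reaction time) can be sketched as follows. The exponential RT-level form, its parameters, and the per-frequency thresholds are all hypothetical placeholders, not the study's fitted values; only the RT floor near 188 ms is taken from the reported RT range.

```python
import math

def rt_ms(level_db, threshold_db, rt_floor=188.0, span=800.0, slope=0.15):
    """Hypothetical RT-level function: reaction time falls from about
    rt_floor + span near the detection threshold toward rt_floor as the
    received level rises above threshold."""
    return rt_floor + span * math.exp(-slope * (level_db - threshold_db))

def level_for_rt(target_rt, threshold_db, rt_floor=188.0, span=800.0, slope=0.15):
    """Invert rt_ms: the level at which the fitted RT equals target_rt."""
    return threshold_db - math.log((target_rt - rt_floor) / span) / slope

# Hypothetical 50% detection thresholds (dB) at a few frequencies (kHz);
# collecting the inverted levels across frequencies gives one
# equal-latency contour (a surrogate equal-loudness contour).
thresholds = {0.125: 70.0, 1.0: 60.0, 8.0: 55.0, 100.0: 85.0}
contour = {f: level_for_rt(400.0, thr) for f, thr in thresholds.items()}
```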
Yang, Yi; Wang, Shuqing; Liu, Yang
2014-01-01
Order insertion often occurs in the scheduling process of logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of LSSC with service capacity and time factors considered. This model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserting order volume first increases and then tends to be stable. Second, supply chain performance reaches its best when the volume of the inserting order is equal to the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process is, the bigger the possible inserting order's volume will be. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful. PMID:25276851
Computational complexity of ecological and evolutionary spatial dynamics
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.
2015-01-01
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
The Procedural Queer: Substantive Due Process, "Lawrence v. Texas," and Queer Rhetorical Futures
ERIC Educational Resources Information Center
Campbell, Peter Odell
2012-01-01
This essay discusses Justice Anthony M. Kennedy's choice to foreground arguments from due process rather than equal protection in the majority opinion in Lawrence v. Texas. Kennedy's choice can realize constitutional legal doctrine that is more consistent with radical queer politics than arguments from equal protection. Unlike some recent…
Desegregation in a Former "Whites Only" School in South Africa
ERIC Educational Resources Information Center
Grootboom, Nomalanga P.
2012-01-01
After decades of racially segregated education under apartheid in South Africa, the process of school desegregation commenced in the 1990s with the aim of equalizing education for all, fostering better relationships, and making equal opportunities available to all learners. The process of desegregation has not been without problems, as it is apparent…
Two-stage energy storage equalization system for lithium-ion battery pack
NASA Astrophysics Data System (ADS)
Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.
2017-11-01
How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To this end, a two-stage energy storage equalization system, comprising a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed on the basis of bidirectional active equalization theory, in order to make the voltages of lithium-ion battery packs, and of the cells inside each pack, consistent using the range method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. Equalization time was 0.5 ms, shortening equalization time by 33.3 percent compared with the DC-DC converter alone. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity across battery packs and the cells inside them, while the efficiency of energy storage is significantly improved.
Fast-Response-Time Shape-Memory-Effect Foam Actuators
NASA Technical Reports Server (NTRS)
Jardine, Peter
2010-01-01
Bulk shape memory alloys, such as Nitinol or CuAlZn, display strong recovery forces when undergoing a phase transformation after being strained in their martensitic state. These recovery forces are used for actuation. As the phase transformation is thermally driven, the response time of the actuation can be slow, because the heat must be passively inserted into or removed from the alloy. Shape memory alloy TiNi torque tubes have been investigated for at least 20 years; they have demonstrated high actuation forces [3,000 in.-lb (approximately equal to 340 N-m) torques] and are very lightweight. However, they are not easy to attach to existing structures. Adhesives will fail in shear at low-torque loads and the TiNi is not weldable, so mechanical crimp fits have generally been used. These are not reliable, especially in vibratory environments. The TiNi is also slow to heat up, as it can only be heated indirectly using a heater, and cooling must be done passively. This has restricted their use to on-off actuators where cycle times of approximately one minute are acceptable. Self-propagating high-temperature synthesis (SHS) has been used in the past to make porous TiNi metal foams. Shape Change Technologies has been able to train SHS-derived TiNi to exhibit the shape memory effect. As it is an open-celled material, fast response times were observed when the material was heated using hot and cold fluids. A methodology was developed to make the open-celled porous TiNi foams as a tube with integrated hexagonal ends, which then becomes a torsional actuator with fast response times. Under processing developed independently, researchers were able to verify torques of 84 in.-lb (approximately equal to 9.5 N-m) using an actuator weighing 1.3 oz (approximately equal to 37 g) with very fast (less than 1/16th of a second) initial response times when hot and cold fluids were used to facilitate heat transfer.
Integrated structural connections were added as part of the net-shape process, eliminating the need for welding, adhesives, or mechanical crimping. Inexpensive net-shape processing was used, which reduces the cost of the actuator by over a factor of 10 relative to nonporous TiNi made by hot drawing of tube stock or electrical discharge machining. By forming the alloy as an open-celled foam, the surface area for heat transfer is dramatically increased, allowing for much faster response times. The technology also allows for net-shape fabrication of the actuator, enabling structural connections to be integrated into the actuator material and making these actuators significantly less expensive. Commercial applications include actuators for concepts such as the variable area chevron and nozzle in jet aircraft. Lightweight tube or rod components can be supplied to interested parties.
D’Aquila, Laura A.; Desloge, Joseph G.; Braida, Louis D.
2017-01-01
The masking release (MR; i.e., better speech recognition in fluctuating compared with continuous noise backgrounds) that is evident for listeners with normal hearing (NH) is generally reduced or absent for listeners with sensorineural hearing impairment (HI). In this study, a real-time signal-processing technique was developed to improve MR in listeners with HI and offer insight into the mechanisms influencing the size of MR. This technique compares short-term and long-term estimates of energy, increases the level of short-term segments whose energy is below the average energy, and normalizes the overall energy of the processed signal to be equivalent to that of the original long-term estimate. This signal-processing algorithm was used to create two types of energy-equalized (EEQ) signals: EEQ1, which operated on the wideband speech plus noise signal, and EEQ4, which operated independently on each of four bands with equal logarithmic width. Consonant identification was tested in backgrounds of continuous and various types of fluctuating speech-shaped Gaussian noise including those with both regularly and irregularly spaced temporal fluctuations. Listeners with HI achieved similar scores for EEQ and the original (unprocessed) stimuli in continuous-noise backgrounds, while superior performance was obtained for the EEQ signals in fluctuating background noises that had regular temporal gaps but not for those with irregularly spaced fluctuations. Thus, in noise backgrounds with regularly spaced temporal fluctuations, the energy-normalized signals led to larger values of MR and higher intelligibility than obtained with unprocessed signals. PMID:28602128
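The energy-equalization operation described above can be sketched in a few lines (an illustrative sketch, not the authors' implementation; the frame length and gain limit are assumed parameters):

```python
import numpy as np

def energy_equalize(signal, frame_len=160, gain_limit=10.0):
    """Sketch of the wideband (EEQ1-style) scheme: frames whose short-term
    energy falls below the long-term average are amplified, then the whole
    signal is rescaled so its total energy matches the original."""
    x = np.asarray(signal, dtype=float)
    long_term = np.mean(x ** 2)                  # long-term energy estimate
    out = x.copy()
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = out[start:start + frame_len]
        short_term = np.mean(frame ** 2)         # short-term energy estimate
        if 0.0 < short_term < long_term:
            gain = min(np.sqrt(long_term / short_term), gain_limit)
            out[start:start + frame_len] = frame * gain
    # normalize overall energy back to that of the original signal
    out *= np.sqrt(np.sum(x ** 2) / np.sum(out ** 2))
    return out
```

In the EEQ4 variant described above, the same operation would be applied independently within each of four bands of equal logarithmic width.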
The Kuznets process in Malaysia.
Randolph, S
1990-10-01
This study looks at how the Kuznets process, the structural determinants of the aggregate inequality trend during the course of economic development, is transpiring in Malaysia. A time-series test of Kuznets's hypothesis concerning the trend in participation income in the course of economic growth and its underlying structural components is conducted using data from the Malaysian Family Life Survey. The study covers the period 1968-76 during which the equalizing phase of growth was expected to take hold. Analysis determined that while many of the underlying processes which Kuznets speculated combined to generate the aggregate trend in participation income are at work in Malaysia, others are either absent or their phasing has been altered. The equalizing phase in the course of development has been delayed in arriving. Inequality in the nonagricultural sector exceeded that in the agricultural sector, and the wage gap which opened during the early phase of development declined with further development. These findings conform with Kuznets's expectations. Available time-series evidence from other currently developing countries suggests that inequality is typically higher in the nonagricultural sector during the early phase of development and that an increasing and subsequently decreasing between-sector wage gap is a broadly shared experience. This study's findings also support Kuznets's expectation that inequality within the agricultural sector can worsen in the face of dualistic agricultural development. Finally, Malaysia's trend in inequality within the nonagricultural sector exerted the greatest influence upon the aggregate trend in inequality per Kuznets's hypothesis.
Method for enhancing signals transmitted over optical fibers
Ogle, James W.; Lyons, Peter B.
1983-01-01
A method for spectral equalization of high frequency spectrally broadband signals transmitted through an optical fiber. The broadband signal input is first dispersed by a grating. Narrow spectral components are collected into an array of equalizing fibers. The fibers serve as optical delay lines compensating for material dispersion of each spectral component during transmission. The relative lengths of the individual equalizing fibers are selected to compensate for such prior dispersion. The output of the equalizing fibers couple the spectrally equalized light onto a suitable detector for subsequent electronic processing of the enhanced broadband signal.
Thermal energy storage for industrial waste heat recovery
NASA Technical Reports Server (NTRS)
Hoffman, H. W.; Kedl, R. J.; Duscha, R. A.
1978-01-01
The potential is examined for waste heat recovery and reuse through thermal energy storage in five specific industrial categories: (1) primary aluminum, (2) cement, (3) food processing, (4) paper and pulp, and (5) iron and steel. Preliminary results from Phase 1 feasibility studies suggest energy savings through fossil fuel displacement approaching 0.1 quad/yr in the 1985 period. Early implementation of recovery technologies with minimal development appears likely in the food processing and paper and pulp industries; development of the other three categories, though equally desirable, will probably require a greater investment in time and dollars.
NASA Technical Reports Server (NTRS)
Binns, W. R.; Fernandez, J. I.; Israel, M. H.; Klarmann, J.; Maehl, R. C.; Mewaldt, R. A.
1974-01-01
Results are presented on the chemical composition of VVH cosmic rays from a series of six high-altitude balloon flights of a large-area, high-resolution electronic detector. The charge composition in the 32 ≤ Z ≤ 45 interval is found to be inconsistent with s-process nucleosynthesis. The energy spectrum of particles with Z ≥ 32 between 600 and 1500 MeV/N at the top of the atmosphere is measured and is found to be consistent with the 25 ≤ Z ≤ 27 group within experimental error.
Resource depletion promotes automatic processing: implications for distribution of practice.
Scheel, Matthew H
2010-12-01
Recent models of cognition include two processing systems: an automatic system that relies on associative learning, intuition, and heuristics, and a controlled system that relies on deliberate consideration. Automatic processing requires fewer resources and is more likely when resources are depleted. This study showed that prolonged practice on a resource-depleting mental arithmetic task promoted automatic processing on a subsequent problem-solving task, as evidenced by faster responding and more errors. Distribution of practice effects (0, 60, 120, or 180 sec. between problems) on rigidity also disappeared when groups had equal time on resource-depleting tasks. These results suggest that distribution of practice effects is reducible to resource availability. The discussion includes implications for interpreting discrepancies in the traditional distribution of practice effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyajima, Yoji, E-mail: miyajima.y.ab@m.titech.ac.jp; Okubo, Satoshi; Abe, Hiroki
The dislocation density of pure copper fabricated by two severe plastic deformation (SPD) processes, i.e., accumulative roll bonding and equal-channel angular pressing, was evaluated using scanning transmission electron microscopy/transmission electron microscopy observations. The dislocation density drastically increased from ~10¹³ m⁻² to about 5 × 10¹⁴ m⁻², and then saturated, for both SPD processes.
RNA folding kinetics using Monte Carlo and Gillespie algorithms.
Clote, Peter; Bayegan, Amir H
2018-04-01
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence-see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
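The regular-network case of the MFPT relation above can be checked exactly with linear algebra rather than simulation (a sketch; the ring topology, example energies, and Metropolis rate convention are assumptions made for illustration):

```python
import numpy as np

def mfpt_mc_and_gillespie(energies, target=0, kT=1.0):
    """Exact mean first passage times to `target` on a ring of states, for
    (a) a Metropolis Monte Carlo chain that spends one time unit per step,
    rejections included, and (b) the Gillespie jump process with the same
    rates."""
    n, d = len(energies), 2                     # ring: every node has degree 2
    a = np.zeros((n, n))                        # Metropolis acceptance/rate matrix
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            a[i, j] = min(1.0, np.exp(-(energies[j] - energies[i]) / kT))
    idx = [i for i in range(n) if i != target]
    A_mc = np.zeros((n - 1, n - 1)); b_mc = np.ones(n - 1)
    A_g = np.zeros((n - 1, n - 1)); b_g = np.zeros(n - 1)
    for r, i in enumerate(idx):
        R = a[i].sum()                          # total escape rate from state i
        A_mc[r, r] = R / d                      # MC: rejections act as a self-loop
        A_g[r, r] = 1.0
        b_g[r] = 1.0 / R                        # Gillespie: mean waiting time at i
        for c, j in enumerate(idx):
            A_mc[r, c] -= a[i, j] / d
            A_g[r, c] -= a[i, j] / R
    steps = np.linalg.solve(A_mc, b_mc)         # MC expected steps to target
    time_ = np.linalg.solve(A_g, b_g)           # Gillespie expected time to target
    return steps, time_
```

On a ring every state has degree 2, so the Monte Carlo hitting time (in steps) is exactly twice the Gillespie hitting time; on a non-regular network the per-state factor varies and this simple proportionality breaks down, consistent with the abstract.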
ERIC Educational Resources Information Center
Lewis, Donald Marion
1979-01-01
Demonstrates the role the guarantee of due process can play in ensuring that vital interests in public education not be lost through erroneous assessments of a student's proficiency in basic skills, and describes the limits constitutional and statutory guarantees of equal educational opportunity place on the use of competency testing. (Author/IRT)
ERIC Educational Resources Information Center
Webb, Derwin L.
1997-01-01
Participation in sports, in some instances, is considered a right which grants students the opportunity to be involved in extracurricular activities. Discusses the potential violation of home-schooled students' constitutional due process and equal protection rights and the pertinent laws regarding students and their ability to participate in…
Moving in time: Bayesian causal inference explains movement coordination to auditory beats
Elliott, Mark T.; Wing, Alan M.; Welchman, Andrew E.
2014-01-01
Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved. PMID:24850915
Time-Resolved Electronic Relaxation Processes in Self-Organized Quantum Dots
2005-05-16
in a quantum dot infrared photodetector ,” paper CthM11, presented at CLEO, Baltimore, 2003. K. Kim, T. Norris, J. Singh, P. Bhattacharya...nanostructures have been equally spectacular. Following the development of quantum-well infrared photodetectors in the late 1980’s and early 90’s...4]. The quantum cascade laser is of course the best known of the new devices, as it constitutes an entirely new concept in semiconductor laser
Stochastic demography and the neutral substitution rate in class-structured populations.
Lehmann, Laurent
2014-05-01
The neutral rate of allelic substitution is analyzed for a class-structured population subject to a stationary stochastic demographic process. The substitution rate is shown to be generally equal to the effective mutation rate, and under overlapping generations it can be expressed as the effective mutation rate in newborns when measured in units of average generation time. With uniform mutation rate across classes the substitution rate reduces to the mutation rate.
Conditioning from an information processing perspective.
Gallistel, C R.
2003-04-28
The framework provided by Claude Shannon's [Bell Syst. Technol. J. 27 (1948) 623] theory of information leads to a quantitatively oriented reconceptualization of the processes that mediate conditioning. The focus shifts from processes set in motion by individual events to processes sensitive to the information carried by the flow of events. The conception of what properties of the conditioned and unconditioned stimuli are important shifts from the tangible properties to the intangible properties of number, duration, frequency and contingency. In this view, a stimulus becomes a CS if its onset substantially reduces the subject's uncertainty about the time of occurrence of the next US. One way to represent the subject's knowledge of that time of occurrence is by the cumulative probability function, which has two limiting forms: (1) The state of maximal uncertainty (minimal knowledge) is represented by the inverse exponential function for the random rate condition, in which the US is equally likely at any moment. (2) The limit to the subject's attainable certainty is represented by the cumulative normal function, whose momentary expectation is the CS-US latency minus the time elapsed since CS onset. Its standard deviation is the Weber fraction times the CS-US latency.
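The two limiting forms of the cumulative probability function can be written out explicitly (a sketch; λ for the random US rate, T for the CS-US latency, and w for the Weber fraction are assumed notation, not the article's own symbols):

```latex
% Maximal uncertainty: under a random rate the US is equally likely at
% any moment, so the waiting time is exponentially distributed
F_{\mathrm{random}}(t) = 1 - e^{-\lambda t}
% Limit of attainable certainty: a cumulative normal centred on the
% CS--US latency T, with standard deviation equal to the Weber
% fraction w times T
F_{\mathrm{timed}}(t) = \Phi\!\left(\frac{t - T}{wT}\right)
```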
Dissociable contributions of motor-execution and action-observation to intramanual transfer.
Hayes, Spencer J; Elliott, Digby; Andrew, Matthew; Roberts, James W; Bennett, Simon J
2012-09-01
We examined the hypothesis that different processes and representations are associated with the learning of a movement sequence through motor-execution and action-observation. Following a pre-test in which participants attempted to achieve an absolute, and relative, time goal in a sequential goal-directed aiming movement, participants received either physical or observational practice with feedback. Post-test performance indicated that motor-execution and action-observation participants learned equally well. Participants then transferred to conditions where the gain between the limb movements and their visual consequences were manipulated. Under both bigger and smaller transfer conditions, motor-execution and action-observation participants exhibited similar intramanual transfer of absolute timing. However, participants in the action-observation group exhibited superior transfer of relative timing than the motor-execution group. These findings suggest that learning via action-observation is underpinned by a visual-spatial representation, while learning via motor-execution depends more on specific force-time planning (feed forward) and afferent processing associated with sensorimotor feedback. These behavioural effects are discussed with reference to neural processes associated with striatum, cerebellum and motor cortical regions (pre-motor cortex; SMA; pre-SMA).
Control of adaptive optic element displacement with the help of a magnetic rheology drive
NASA Astrophysics Data System (ADS)
Deulin, Eugeni A.; Mikhailov, Valeri P.; Sytchev, Victor V.
2000-10-01
The control system for the adaptive optics of a large segmented astronomical telescope was designed and tested. The dynamic model and the amplitude-frequency analysis of the new magnetic rheology (MR) drive are presented. The loop-controlled drive consists of a hydrostatic carrier, an MR hydraulic loop control system, an elastic thin-wall seal, and a stainless seal, which are united in a single three-coordinate manipulator. This combination ensures a small positioning error δφ
ERIC Educational Resources Information Center
Liu, Jian
2012-01-01
This study extends the theoretical perspectives in policy studies on the issue of educational equality by analyzing the influence of cultural values on policies and policy processes. The present paper first teases out the key cultural values regarding education and equality, and then explores how these values shape the institution and policy…
Bayesian analysis of volcanic eruptions
NASA Astrophysics Data System (ADS)
Ho, Chih-Hsiang
1990-10-01
The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson process with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and the simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
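The gamma-Poisson mixture leading to the NBD can be verified numerically (a sketch; the gamma shape and scale values below are arbitrary examples, not estimates for Mauna Loa or Etna):

```python
import math
import numpy as np

# Assumed example values for the gamma prior on the eruptive rate.
alpha, theta = 3.0, 2.0
rng = np.random.default_rng(1)
lam = rng.gamma(alpha, theta, size=200_000)   # eruptive rate drawn per period
counts = rng.poisson(lam)                     # eruptions in each equal time-period

def nb_pmf(k, r=alpha, p=1.0 / (1.0 + theta)):
    """Negative binomial pmf, the analytic gamma-Poisson mixture."""
    return math.gamma(r + k) / (math.gamma(r) * math.factorial(k)) * p**r * (1 - p)**k

empirical = np.bincount(counts, minlength=30)[:30] / counts.size
analytic = np.array([nb_pmf(k) for k in range(30)])
```

The empirical frequencies match the NBD closely, and the counts are overdispersed (variance exceeding the mean), which a simple Poisson model with constant rate cannot reproduce.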
Method for enhancing signals transmitted over optical fibers
Ogle, J.W.; Lyons, P.B.
1981-02-11
A method for spectral equalization of high frequency spectrally broadband signals transmitted through an optical fiber is disclosed. The broadband signal input is first dispersed by a grating. Narrow spectral components are collected into an array of equalizing fibers. The fibers serve as optical delay lines compensating for material dispersion of each spectral component during transmission. The relative lengths of the individual equalizing fibers are selected to compensate for such prior dispersion. The output of the equalizing fibers couple the spectrally equalized light onto a suitable detector for subsequent electronic processing of the enhanced broadband signal.
NASA Astrophysics Data System (ADS)
Lu, Dianchen; Seadawy, Aly R.; Ali, Asghar
2018-06-01
The Equal-Width and Modified Equal-Width equations are used as models, in partial differential equations, for the simulation of one-dimensional wave transmission in nonlinear media with dispersion processes. In this article we have employed the extended simple equation method and the exp(−φ(ξ))-expansion method to construct exact traveling wave solutions of the Equal-Width and Modified Equal-Width equations. The obtained results are novel and have numerous applications in current areas of research in mathematical physics. It is shown that our method, with the help of symbolic computation, provides an effective and powerful mathematical tool for solving different kinds of nonlinear wave problems.
Real-time stereo generation for surgical vision during minimal invasive robotic surgery
NASA Astrophysics Data System (ADS)
Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod
2016-03-01
This paper proposes a framework for 3D surgical vision in minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection with interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time with good speed at full HD resolution.
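One common way to equalize the color profiles of a stereo pair is per-channel histogram matching; the paper does not specify its exact equalization scheme, so the following is an assumed illustration:

```python
import numpy as np

def match_channel(src, ref):
    """Histogram-match one 8-bit channel of one stereo image to the other,
    so the pair shares a consistent colour profile (illustrative sketch)."""
    src_vals, src_counts = np.unique(src.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    # map each source intensity to the reference intensity at the same CDF level
    lut = np.interp(src_cdf, ref_cdf, ref_vals)
    idx = np.searchsorted(src_vals, src.ravel())
    return lut[idx].reshape(src.shape).astype(np.uint8)
```

Applied to each of the three channels of the left image with the right image as reference (or vice versa), this brings the two intensity distributions into agreement before the interlacing step.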
Battery Cell Balancing Optimisation for Battery Management System
NASA Astrophysics Data System (ADS)
Yusof, M. S.; Toha, S. F.; Kamisan, N. A.; Hashim, N. N. W. N.; Abdullah, M. A.
2017-03-01
Battery cell balancing in electrical systems, from home electronic equipment to electric vehicles, is very important for extending battery run time, commonly known as battery life. The underlying solution is to equalize the cell voltages and state of charge (SOC) across cells when they are fully charged. To control and extend battery life, cell balancing is designed and manipulated so as to shorten the charging process. Active and passive cell-balancing strategies enable balancing of the battery in a well-performing configuration so that charging is faster. The experiments and simulations cover an analysis of how quickly the battery can be balanced within a given time. A simulation-based analysis is conducted to certify the use of optimisation in active or passive cell balancing to extend battery life over long periods of time.
Physical nature of longevity of light actinides in dynamic failure phenomenon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchaev, A. Ya., E-mail: uchaev@expd.vniief.ru; Punin, V. T.; Selchenkova, N. I.
It is shown in this work that the physical nature of the longevity of light actinides under extreme conditions, in a range of nonequilibrium states of t ∼ 10⁻⁶-10⁻¹⁰ s, is determined by the time needed for the formation of a critical concentration of a cascade of failure centers, which changes the connectivity of the body. These centers form a percolation cluster. The longevity is composed of the waiting time t_w for the appearance of failure centers and the clusterization time t_c of the cascade of failure centers, when connectivity in the system of failure centers and the percolation cluster arise. A unique mechanism of the dynamic failure process, a unique order parameter, and an equal dimensionality of the space in which the process occurs determine the physical nature of the longevity of metals, including fissionable materials.
Thermal Microstructural Stability of AZ31 Magnesium after Severe Plastic Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, John P.; Askari, Hesam A.; Hovanski, Yuri
2015-03-01
Both equal channel angular pressing and friction stir processing can refine the grain size of twin roll cast AZ31 magnesium and potentially improve its superplastic properties. This work used isochronal and isothermal heat treatments to investigate the microstructural stability of twin roll cast, equal channel angular pressed, and friction stir processed AZ31 magnesium. For both heat treatment conditions, the twin roll cast and equal channel angular pressed materials were found to be more stable than the friction stir processed material. Calculations of the grain growth kinetics showed that severe plastic deformation processing decreased the activation energy for grain boundary motion, with the equal channel angular pressed material having the greatest Q value of the severely plastically deformed materials, and that increasing the tool travel speed of the friction stir processed material improved microstructural stability. The Hollomon-Jaffe parameter was found to be an accurate means of identifying the annealing conditions that will result in substantial grain growth and loss of potential superplastic properties in the severely plastically deformed materials. In addition, Humphreys's model of cellular microstructural stability accurately predicted the relative microstructural stability of the severely plastically deformed materials and, with some modification, closely predicted the maximum grain size ratio achieved by the severely plastically deformed materials.
Pyrolysis process for the treatment of scrap tyres: preliminary experimental results.
Galvagno, S; Casu, S; Casabianca, T; Calabrese, A; Cornacchia, G
2002-01-01
The aim of this work is the evaluation, on a pilot scale, of scrap tyre pyrolysis process performance and the characteristics of the products under different process parameters, such as temperature, residence time, pressure, etc. In this frame, a series of tests were carried out at varying process temperatures between 550 and 680 degrees C, other parameters being equal. Pyrolysis plant process data are collected by an acquisition system; scrap tyre samples used for the treatment, solid and liquid by-products and produced syngas were analysed through both on-line monitoring (for gas) and laboratory analyses. Results show that process temperature, in the explored range, does not seem to seriously influence the volatilisation reaction yield, at least from a quantitative point of view, while it observably influences the distribution of the volatile fraction (liquid and gas) and by-products characteristics.
Test of a hypothesis of realism in quantum theory using a Bayesian approach
NASA Astrophysics Data System (ADS)
Nikitin, N.; Toms, K.
2017-05-01
In this paper we propose a time-independent equality and a time-dependent inequality, suitable for an experimental test of the hypothesis of realism. The derivation of these relations is based on the concept of conditional probability and on Bayes' theorem in the framework of Kolmogorov's axiomatics of probability theory. The equality obtained is intrinsically different from the well-known Greenberger-Horne-Zeilinger (GHZ) equality and its variants, because violation of the proposed equality might be tested in experiments with only two microsystems in a maximally entangled Bell state |Ψ⁻⟩, while a test of the GHZ equality requires at least three quantum systems in a special state |Ψ_GHZ⟩. The obtained inequality differs from Bell's, Wigner's, and Leggett-Garg inequalities, because it deals with spin s = 1/2 projections onto only two nonparallel directions at two different moments of time, while a test of the Bell and Wigner inequalities requires at least three nonparallel directions, and a test of the Leggett-Garg inequalities requires at least three distinct moments of time. Hence, the proposed inequality seems to open an additional experimental possibility to avoid the "contextuality loophole." Violation of the proposed equality and inequality is illustrated with the behavior of a pair of anticorrelated spins in an external magnetic field and also with the oscillations of flavor-entangled pairs of neutral pseudoscalar mesons.
Acoustic MIMO communications in a very shallow water channel
NASA Astrophysics Data System (ADS)
Zhou, Yuehai; Cao, Xiuling; Tong, Feng
2015-12-01
Underwater acoustic channels pose significant difficulty for the development of high speed communication due to highly limited bandwidth as well as hostile multipath interference. Enlightened by the rapid progress of multiple input multiple output (MIMO) technologies in wireless communication scenarios, MIMO systems offer a potential solution by enabling multiple spatially parallel communication channels to improve communication performance as well as capacity. For MIMO acoustic communications, deep sea channels offer substantial spatial diversity among multiple channels that can be exploited to address simultaneous multipath and co-channel interference. At the same time, there are increasing requirements for high speed underwater communication in very shallow water areas (for example, a depth of less than 10 m). In this paper, a space-time multichannel adaptive receiver consisting of multiple decision feedback equalizers (DFE) is adopted as the receiver for a very shallow water MIMO acoustic communication system. The performance of multichannel DFE receivers with a relatively small number of receiving elements is analyzed and compared with that of the multichannel time reversal receiver to evaluate the impact of limited spatial diversity on multichannel equalization and time reversal processing. The results of sea trials in a very shallow water channel are presented to demonstrate the feasibility of very shallow water MIMO acoustic communication.
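The core of a decision feedback equalizer of the kind used in such receivers can be sketched in a few lines. The sketch below is a generic single-channel, training-directed DFE with LMS adaptation; the function name, tap counts, and step size are illustrative assumptions, not the paper's actual multichannel receiver:

```python
import numpy as np

def dfe_lms(rx, training, n_ff=8, n_fb=4, mu=0.01):
    """Training-directed decision feedback equalizer with LMS adaptation.

    rx       : received (channel-distorted) real samples
    training : known BPSK training symbols aligned with rx
    n_ff     : feed-forward taps; n_fb : feedback taps; mu : LMS step size
    """
    w_ff = np.zeros(n_ff)                      # feed-forward filter
    w_fb = np.zeros(n_fb)                      # feedback (ISI-cancelling) filter
    past = np.zeros(n_fb)                      # most recent past decisions
    out = np.zeros(len(training))
    for k in range(n_ff - 1, len(training)):
        x = rx[k - n_ff + 1:k + 1][::-1]       # current and past samples
        y = w_ff @ x - w_fb @ past             # equalizer output
        e = training[k] - y                    # training-directed error
        w_ff += mu * e * x                     # stochastic gradient updates
        w_fb -= mu * e * past
        past = np.roll(past, 1)
        past[0] = training[k]
        out[k] = y
    return out
```

On a toy two-tap multipath channel, the decisions made from the equalizer output converge to the transmitted symbols once the filters adapt.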
Richardson-Klavehn, A; Gardiner, J M
1998-05-01
Depth-of-processing effects on incidental perceptual memory tests could reflect (a) contamination by voluntary retrieval, (b) sensitivity of involuntary retrieval to prior conceptual processing, or (c) a deficit in lexical processing during graphemic study tasks that affects involuntary retrieval. The authors devised an extension of incidental test methodology--making conjunctive predictions about response times as well as response proportions--to discriminate among these alternatives. They used graphemic, phonemic, and semantic study tasks, and a word-stem completion test with incidental, intentional, and inclusion instructions. Semantic study processing was superior to phonemic study processing in the intentional and inclusion tests, but semantic and phonemic study processing produced equal priming in the incidental test, showing that priming was uncontaminated by voluntary retrieval--a conclusion reinforced by the response-time data--and that priming was insensitive to prior conceptual processing. The incidental test nevertheless showed a priming deficit following graphemic study processing, supporting the lexical-processing hypothesis. Adding a lexical decision to the 3 study tasks eliminated the priming deficit following graphemic study processing, but did not influence priming following phonemic and semantic processing. The results provide the first clear evidence that depth-of-processing effects on perceptual priming can reflect lexical processes, rather than voluntary contamination or conceptual processes.
Ion Exchange Method - Diffusion Barrier Investigations
NASA Astrophysics Data System (ADS)
Pielak, G.; Szustakowski, M.; Kiezun, A.
1990-01-01
The ion exchange method is used for manufacturing GRIN-rod lenses. In this process, ion exchange occurs between the bulk glass (rod) and a molten salt. It was found that a diffusion barrier exists at the border of the glass surface and the molten salt. Investigations of this barrier show that its value varies with ion exchange time and process temperature. In the case when a thallium glass rod was treated in a KNO3 bath, the minimum of the potential after 24 h occurred at a temperature of 407°C, after 48 h at 422°C, after 72 h at 438°C, and so on. There is thus the possibility of maintaining the minimum of the diffusion barrier by changing the temperature of the process, which makes the ion exchange process most effective. The time needed to obtain a suitable refractive index distribution in a process where the temperature was changed linearly from 400°C to 460°C was about 30% shorter compared with a process in which the temperature was held constant at 450°C.
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined whose joint covariance matrix of the combined vector of outputs over the interval of definition is greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, within one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
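The design criterion just described can be stated compactly. Writing Σ_M and Σ_P for the joint output covariance matrices of the model and the process over the interval (symbol names assumed here, not taken from the paper), the design selects, within a fixed-order model class 𝒞,

```latex
\min_{M \in \mathcal{C}} \; \operatorname{tr}\!\left(\Sigma_M - \Sigma_P\right)
\quad \text{subject to} \quad \Sigma_M - \Sigma_P \succeq 0 .
```

The positive-semidefiniteness constraint is what guarantees the stated upper-bound property of the estimation error covariance.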
Temporal processing deficit leads to impaired multisensory binding in schizophrenia.
Zvyagintsev, Mikhail; Parisi, Carmen; Mathiak, Klaus
2017-09-01
Schizophrenia has been characterised by neurodevelopmental dysconnectivity resulting in cognitive and perceptual dysmetria. Hence patients with schizophrenia may be impaired in detecting the temporal relationship between stimuli in different sensory modalities. However, only a few studies have described a deficit in the perception of temporally asynchronous multisensory stimuli in schizophrenia. We examined the perceptual bias and the processing time of synchronous and delayed sounds in the streaming-bouncing illusion in 16 patients with schizophrenia and a matched control group of 18 participants. For patients and controls equally, the synchronous sound biased the percept of two moving squares towards bouncing, as opposed to the more frequent streaming percept in the condition without sound. In healthy controls, a delay of the sound presentation significantly reduced the bias and led to prolonged processing time, whereas patients with schizophrenia did not differentiate between this condition and the condition with the synchronous sound. Schizophrenia leads to a prolonged window of simultaneity for audiovisual stimuli. Therefore, the temporal processing deficit in schizophrenia can lead to hyperintegration of temporally unmatched multisensory stimuli.
Optimizing photo-Fenton like process for the removal of diesel fuel from the aqueous phase
2014-01-01
Background In recent years, pollution of soil and groundwater caused by fuel leakage from old underground storage tanks, the oil extraction process, refineries, fuel distribution terminals, improper disposal and spills during transfer has been reported. Diesel fuel has created many problems for water resources. The main objectives of this research were to assess the feasibility of the photo-Fenton-like method using nano zero-valent iron (nZVI/UV/H2O2) in removing total petroleum hydrocarbons (TPH) and to determine the optimal conditions using the Taguchi method. Results The influence of different parameters, including the initial concentration of TPH (0.1-1 mg/L), H2O2 concentration (5-20 mmol/L), nZVI concentration (10-100 mg/L), pH (3-9), and reaction time (15-120 min), on the TPH reduction rate in diesel fuel was investigated. The variance analysis suggests that the optimal conditions for the TPH reduction rate from diesel fuel in the aqueous phase are as follows: initial TPH concentration equal to 0.7 mg/L, nZVI concentration 20 mg/L, H2O2 concentration equal to 5 mmol/L, pH 3, and a reaction time of 60 min; the degrees of significance for the study parameters are 7.643%, 9.33%, 13.318%, 15.185% and 6.588%, respectively. The predicted removal rate at the optimal conditions was 95.8%, confirmed by the data obtained in this study, which were between 95 and 100%. Conclusion In conclusion, the photo-Fenton-like process using nZVI may enhance the rate of diesel degradation in polluted water and could be used as a pretreatment step for the biological removal of TPH from diesel fuel in the aqueous phase. PMID:24955242
Evaluating CMA equalization of SOQPSK-TG data for aeronautical telemetry
NASA Astrophysics Data System (ADS)
Cole-Rhodes, Arlene; KoneDossongui, Serge; Umuolo, Henry; Rice, Michael
2015-05-01
This paper presents the results of using a constant modulus algorithm (CMA) to recover shaped offset quadrature-phase shift keying (SOQPSK)-TG modulated data, which has been transmitted using the iNET data packet structure. This standard is defined and used for aeronautical telemetry. Based on the iNET-packet structure, the adaptive block processing CMA equalizer can be initialized using the minimum mean square error (MMSE) equalizer [3]. This CMA equalizer is being evaluated for use on iNET structured data, with initial tests being conducted on measured data which has been received in a controlled laboratory environment. Thus the CMA equalizer is applied at the receiver to data packets which have been experimentally generated in order to determine the feasibility of our equalization approach, and its performance is compared to that of the MMSE equalizer. Performance evaluation is based on computed bit error rate (BER) counts for these equalizers.
Regionally adaptive histogram equalization of the chest.
Sherrier, R H; Johnson, G A
1987-01-01
Advances in the area of digital chest radiography have resulted in the acquisition of high-quality images of the human chest. With these advances, there arises a genuine need for image processing algorithms specific to the chest, in order to fully exploit this digital technology. We have implemented the well-known technique of histogram equalization, noting the problems encountered when it is adapted to chest images. These problems have been successfully solved with our regionally adaptive histogram equalization method. With this technique histograms are calculated locally and then modified according to both the mean pixel value of that region as well as certain characteristics of the cumulative distribution function. This process, which has allowed certain regions of the chest radiograph to be enhanced differentially, may also have broader implications for other image processing tasks.
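As a point of reference, the plain tile-wise variant of histogram equalization (without the authors' region-mean-dependent histogram modification or any tile interpolation) can be sketched as follows; the tile counts and the 8-bit gray scale are illustrative assumptions:

```python
import numpy as np

def regional_hist_eq(img, tiles=(4, 4), levels=256):
    """Tile-wise histogram equalization sketch (no blending between tiles)."""
    out = np.empty_like(img)
    rows = np.array_split(np.arange(img.shape[0]), tiles[0])
    cols = np.array_split(np.arange(img.shape[1]), tiles[1])
    for r in rows:
        for c in cols:
            block = img[np.ix_(r, c)]
            hist = np.bincount(block.ravel(), minlength=levels)
            cdf = hist.cumsum() / block.size          # cumulative distribution
            lut = np.round(cdf * (levels - 1)).astype(img.dtype)
            out[np.ix_(r, c)] = lut[block]            # remap block pixels
    return out
```

Applied to a low-contrast block, the lookup table stretches the occupied gray levels across the full scale, which is the effect the regional variant then moderates per region.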
ERIC Educational Resources Information Center
Stern, Mark
2015-01-01
Over the past five years, marriage equality and charter schools have emerged at the forefront of political conversations about equality and rights. Some argue that these policies extend access to certain benefits and opportunities to historically oppressed communities, thus furthering liberalism and egalitarianism. In this article, I engage these…
2010-08-01
astigmatism and other sources, and stay constant from time to time (LC Technologies, 2000). Systematic errors can sometimes reach many degrees of visual angle...Taking the average of all disparities would mean treating each as equally important regardless of whether they are from correct or incorrect mappings. In...likely stop somewhere near the centroid because the large hM basically treats every point equally (or nearly equally if using the multivariate
A natural-color mapping for single-band night-time image based on FPGA
NASA Astrophysics Data System (ADS)
Wang, Yilun; Qian, Yunsheng
2018-01-01
An FPGA-based natural-color mapping method for single-band night-time images can transfer the color of a reference image to the single-band night-time image, which is consistent with human visual habits and can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm based on FPGA. First, the image is transformed based on histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity features and standard deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
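Setting the FPGA pipeline aside, the mean/standard-deviation matching step can be sketched in software. The function below imposes a reference image's per-channel statistics on the single-band input; it is a rough stand-in for the statistics-matching stage, not the paper's implementation, and the function name and global (non-regional) matching are assumptions:

```python
import numpy as np

def stat_color_mapping(gray, ref):
    """Give a single-band image the per-channel statistics of a color reference.

    gray : 2-D night-time image (float array)
    ref  : H x W x 3 reference color image (float array)
    """
    g = (gray - gray.mean()) / (gray.std() + 1e-9)   # normalize input band
    out = np.empty(gray.shape + (3,))
    for ch in range(3):                              # impose reference stats
        out[..., ch] = g * ref[..., ch].std() + ref[..., ch].mean()
    return out
```

By construction, each output channel reproduces the reference channel's mean and standard deviation, so the mapped image inherits the reference's overall color balance.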
78 FR 2397 - Sunshine Act Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-11
... EQUAL EMPLOYMENT OPPORTUNITY COMMISSION Sunshine Act Notice AGENCY HOLDING THE MEETING: Equal Employment Opportunity Commission. DATE AND TIME: Wednesday, January 16, 2013, 9:30 a.m. Eastern Time. PLACE: Commission Meeting Room on the First Floor of the EEOC Office Building, 131 ``M'' Street NE., Washington, DC...
78 FR 16501 - Sunshine Act Meeting Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-15
... EQUAL EMPLOYMENT OPPORTUNITY COMMISSION Sunshine Act Meeting Notice AGENCY: Equal Employment Opportunity Commission. DATE AND TIME: Wednesday, March 20, 2013, 9:50 a.m. Eastern Time. PLACE: Commission Meeting Room on the First Floor of the EEOC Office Building, 131 ``M'' Street NE., Washington, DC 20507...
76 FR 7562 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-10
... EQUAL EMPLOYMENT OPPORTUNITY COMMISSION Sunshine Act Meeting AGENCY HOLDING THE MEETING: Equal Employment Opportunity Commission. DATE AND TIME: Wednesday, February 16, 2011, 9:30 a.m. Eastern Time. PLACE.... Announcement of Notation Votes, and 2. Out of work, out of luck? Denying employment opportunities to unemployed...
2016-05-01
and Kroeger (2002) provide details on sampling and weighting. Following the summary of the survey methodology is a description of the survey analysis... description of priority, for the ADDRESS file). At any given time, the current address used corresponded to the address number with the highest priority...types of address updates provided by the postal service. They are detailed below; each includes a description of the processing steps. 1. Postal Non
Optically phase-locked electronic speckle pattern interferometer
NASA Astrophysics Data System (ADS)
Moran, Steven E.; Law, Robert; Craig, Peter N.; Goldberg, Warren M.
1987-02-01
The design, theory, operation, and characteristics of an optically phase-locked electronic speckle pattern interferometer (OPL-ESPI) are described. The OPL-ESPI system couples an optical phase-locked loop with an ESPI system to generate real-time equal Doppler speckle contours of moving objects from unstable sensor platforms. In addition, the optical phase-locked loop provides the basis for a new ESPI video signal processing technique which incorporates local oscillator phase shifting coupled with video sequential frame subtraction.
Apparatuses and methods for laser reading of thermoluminescent phosphors
Braunlich, Peter F.; Tetzlaff, Wolfgang
1989-01-01
Apparatuses and methods for rapidly reading thermoluminescent phosphors to determine the amount of luminescent energy stored therein. The stored luminescent energy is interpreted as a measure of the total exposure of the thermoluminescent phosphor to ionizing radiation. The thermoluminescent phosphor reading apparatus uses a laser to generate a laser beam. The laser beam power level is monitored by a laser power detector and controlled to maintain the power level at a desired value or values which can vary with time. A shutter or other laser beam interrupting means is used to control exposure of the thermoluminescent phosphor to the laser beam. The laser beam can be equalized using an optical equalizer so that the laser beam has an approximately uniform power density across the beam. The heated thermoluminescent phosphor emits a visible or otherwise detectable luminescent emission which is measured as an indication of the radiation exposure of the thermoluminescent phosphors. Also disclosed are preferred signal processing and control circuits including one system using a digital computer. Also disclosed are time-profiled laser power cycles for pre-anneal, read and post-anneal treatment of phosphors.
Coherent perfect absorption in a homogeneously broadened two-level medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longhi, Stefano
2011-05-15
In recent works, it has been shown, rather generally, that the time-reversed process of lasing at threshold realizes a coherent perfect absorber (CPA). In a CPA, a lossy medium in an optical cavity with a specific degree of dissipation, equal in modulus to the gain of the lasing medium, can perfectly absorb coherent optical waves that are the time-reversed counterpart of the lasing field. Here, the time-reversed process of lasing is considered in detail for a homogeneously broadened two-level medium in an optical cavity and the conditions for CPA are derived. It is shown that, owing to the dispersive properties of the two-level medium, exact time-reversal symmetry is broken and the frequency of the field at which CPA occurs is generally different than the one of the lasing mode. Moreover, at a large cooperation parameter, the observation of CPA in the presence of bistability requires one to operate in the upper branch of the hysteresis cycle.
Does a Computer Have an Arrow of Time?
NASA Astrophysics Data System (ADS)
Maroney, Owen J. E.
2010-02-01
Schulman (Entropy 7(4):221-233, 2005) has argued that Boltzmann’s intuition, that the psychological arrow of time is necessarily aligned with the thermodynamic arrow, is correct. Schulman gives an explicit physical mechanism for this connection, based on the brain being representable as a computer, together with certain thermodynamic properties of computational processes. Hawking (Physical Origins of Time Asymmetry, Cambridge University Press, Cambridge, 1994) presents similar, if briefer, arguments. The purpose of this paper is to critically examine the support for the link between thermodynamics and an arrow of time for computers. The principal arguments put forward by Schulman and Hawking will be shown to fail. It will be shown that any computational process that can take place in an entropy increasing universe, can equally take place in an entropy decreasing universe. This conclusion does not automatically imply a psychological arrow can run counter to the thermodynamic arrow. Some alternative possible explanations for the alignment of the two arrows will be briefly discussed.
Holographic thermalization and generalized Vaidya-AdS solutions in massive gravity
NASA Astrophysics Data System (ADS)
Hu, Ya-Peng; Zeng, Xiao-Xiong; Zhang, Hai-Qing
2017-02-01
We investigate the effect of massive graviton on the holographic thermalization process. Before doing this, we first find out the generalized Vaidya-AdS solutions in the de Rham-Gabadadze-Tolley (dRGT) massive gravity by directly solving the gravitational equations. Then, we study the thermodynamics of these Vaidya-AdS solutions by using the Misner-Sharp energy and unified first law, which also shows that the massive gravity is in a thermodynamic equilibrium state. Moreover, we adopt the two-point correlation function at equal time to explore the thermalization process in the dual field theory, and to see how the graviton mass parameter affects this process from the viewpoint of AdS/CFT correspondence. Our results show that the graviton mass parameter will increase the holographic thermalization process.
Adaptive histogram equalization in digital radiography of destructive skeletal lesions.
Braunstein, E M; Capek, P; Buckwalter, K; Bland, P; Meyer, C R
1988-03-01
Adaptive histogram equalization, an image-processing technique that distributes the pixel values of an image uniformly throughout the gray scale, was applied to 28 plain radiographs of bone lesions after they had been digitized. The nonequalized and equalized digital images were compared by two skeletal radiologists with respect to lesion margins, internal matrix, soft-tissue mass, cortical breakthrough, and periosteal reaction. Receiver operating characteristic (ROC) curves were constructed on the basis of the responses. Equalized images were superior to nonequalized images in the determination of cortical breakthrough and the presence or absence of periosteal reaction. ROC analysis showed no significant difference in the determination of margins, matrix, or soft-tissue masses.
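The area under an ROC curve of the kind used in such reader studies reduces to a rank statistic. A minimal generic sketch (not tied to the study's actual reader responses) is:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(score_pos > score_neg) + 0.5 * P(tie)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    gt = (pos[:, None] > neg[None, :]).mean()      # pairwise comparisons
    eq = (pos[:, None] == neg[None, :]).mean()     # ties count half
    return gt + 0.5 * eq
```

Perfect separation of positive from negative cases gives an AUC of 1.0; scores carrying no information give 0.5.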
NASA Technical Reports Server (NTRS)
Ogallagher, J. J.
1973-01-01
A simple one-dimensional time-dependent diffusion-convection model for the modulation of cosmic rays is presented. This model predicts that the observed intensity at a given time is approximately equal to the intensity given by the time-independent diffusion-convection solution under the interplanetary conditions which existed a time τ in the past, U(t₀) = U_s(t₀ − τ), where τ is the average time spent by a particle inside the modulating cavity. Delay times in excess of several hundred days are possible with reasonable modulation parameters. Interpretation of the phase lags observed during the 1969 to 1970 solar maximum in terms of this model suggests that the modulating region is probably not less than 10 a.u. and may be as much as 35 a.u. in extent.
Cosmetic wastewater treatment by coagulation and advanced oxidation processes.
Naumczyk, Jeremi; Bogacki, Jan; Marcinowski, Piotr; Kowalik, Paweł
2014-01-01
In this study, the treatment of three cosmetic wastewater types was investigated. Coagulation achieved chemical oxygen demand (COD) removals of 74.6%, 37.7% and 74.0% for samples A (Al2(SO4)3), B (Brentafloc F3) and C (PAX 16), respectively. The Fenton process proved effective as well: COD removals were 75.1%, 44.7% and 68.1%, respectively. Coagulation with FeCl3 followed by the photo-Fenton process gave the best final COD removals, equal to 92.4%, 62.8% and 90.2%. For the Fenton process after coagulation these values were 74.9%, 50.1% and 84.8%, while for the H2O2/UV process the obtained COD removals were 83.8%, 36.2% and 80.9%. The high COD removal in the Fenton process for the A and C wastewater samples was caused by a significant contribution of the final neutralization/coagulation step. The very small effect of the oxidation reaction in the Fenton process for sample A resulted from the presence of antioxidants ('OH radical scavengers) in the wastewater.
Cryptosporidium-contaminated water disinfection by a novel Fenton process.
Matavos-Aramyan, Sina; Moussavi, Mohsen; Matavos-Aramyan, Hedieh; Roozkhosh, Sara
2017-05-01
Three novel modified advanced oxidation process systems, an ascorbic acid-, a pro-oxidants- and an ascorbic acid-pro-oxidants-modified Fenton system, were utilized to study the disinfection efficiency on Cryptosporidium-contaminated drinking water samples. Different concentrations of divalent and trivalent iron ions, hydrogen peroxide, ascorbic acid and pro-oxidants at different exposure times were investigated. These novel systems were also compared to the classic Fenton system and to a control system which consisted only of hydrogen peroxide. The complete in vitro mechanism of the modified Fenton systems is also provided. The results indicate that, within the optimal parameter limits, the ascorbic acid-modified Fenton system decreased the viability of Cryptosporidium oocytes to 3.91%, while the pro-oxidant-modified and ascorbic acid-pro-oxidant-modified Fenton systems achieved oocyte viabilities equal to 1.66% and 0%, respectively. The efficiency of the classic Fenton system at the optimal condition was 20.12% oocyte viability; the control system gave 86.14%. The optimum values of the operational parameters in this study were found to be 80 mg/L for divalent iron, 30 mg/L for ascorbic acid, 30 mmol for hydrogen peroxide, 25 mg/L for pro-oxidants and an exposure time equal to 5 min. The ascorbic acid-pro-oxidants-modified Fenton system achieved promising complete water disinfection (0% viability) at the optimal conditions, making this method a feasible process for water disinfection or decontamination, even at industrial scales. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirschner, J.; Kerherve, G.; Winkler, C.
In this article, a novel time-of-flight spectrometer for two-electron-emission (e,2e/γ,2e) correlation spectroscopy from surfaces at low electron energies is presented. The spectrometer consists of electron optics that collect emitted electrons over a solid angle of approximately 1 sr and focus them onto a multichannel plate using a reflection technique. The flight time of an electron with kinetic energy E_kin ≈ 25 eV is around 100 ns. The corresponding time and energy resolution are typically ≈1 ns and ≈0.65 eV, respectively. The first (e,2e) data obtained with the present setup from a LiF film are presented.
NASA Astrophysics Data System (ADS)
Fereshteh-Saniee, Faramarz; Asgari, Mohammad; Fakhar, Naeimeh
2016-08-01
Despite valuable electrical characteristics, the use of pure aluminum in different applications has been limited due to its low strength. Non-equal channel angular pressing (NECAP) is a recently proposed severe plastic deformation process with greater induced plastic strain and, consequently, better grain refinement in the product, compared with the well-known equal channel angular pressing technique. This research is concerned with the effects of the process temperature and ram velocity on the mechanical, workability and electrical properties of AA1060 aluminum alloy. Increasing the process temperature can concurrently increase the workability, ductility and electrical conductivity, while it has a reverse influence on the strength of the NECAPed specimen, although the strengths of all the products are higher than the as-received alloy. The influence of the ram speed on the mechanical properties of the processed samples is lower than the process temperature. Finally, a compromised process condition is introduced in order to attain a good combination of workability and strength with well-preserved electrical conductivity for electrical applications of components made of pure aluminum.
An improved value of the lunar moment of inertia
NASA Technical Reports Server (NTRS)
Blackshear, W. T.; Gapcynski, J. P.
1977-01-01
The lunar gravitational research reported by Gapcynski et al. (1975) has been extended to include an additional 600 days of the time variation of the ascending node for the Explorer 49 spacecraft. Analysis of these additional data resulted in an improved value of the second-degree zonal harmonic coefficient C(20) = (-2.0219 ± 0.0091) × 10⁻⁴. This value of C(20), used in conjunction with the libration parameters β = (631.27 ± 0.03) × 10⁻⁶ and γ = (227.7 ± 0.7) × 10⁻⁶, yields a more accurate definition of the lunar moment of inertia ratio, equal to 0.391 ± 0.002.
Assessment of Commitment to Equal Opportunity Goals in the Military
1988-09-30
ASSESSMENT OF COMMITMENT TO EQUAL OPPORTUNITY GOALS IN THE MILITARY, by Carl A. Bartling, Ph.D., Department of Psychology, Arkansas College, Batesville, Arkansas, for The Defense Equal Opportunity Management Institute, Patrick Air Force Base, Florida. United States Navy-ASEE 1988 Summer Faculty Research report: Assessment of Commitment to Equal Opportunity Goals in the Military (UNCLASSIFIED).
Kruskal, Jonathan B; Reedy, Allen; Pascal, Laurie; Rosen, Max P; Boiselle, Phillip M
2012-01-01
Many hospital radiology departments are adopting "lean" methods developed in automobile manufacturing to improve operational efficiency, eliminate waste, and optimize the value of their services. The lean approach, which emphasizes process analysis, has particular relevance to radiology departments, which depend on a smooth flow of patients and uninterrupted equipment function for efficient operation. However, the application of lean methods to isolated problems is not likely to improve overall efficiency or to produce a sustained improvement. Instead, the authors recommend a gradual but continuous and comprehensive "lean transformation" of work philosophy and workplace culture. Fundamental principles that must consistently be put into action to achieve such a transformation include equal involvement of and equal respect for all staff members, elimination of waste, standardization of work processes, improvement of flow in all processes, use of visual cues to communicate and inform, and use of specific tools to perform targeted data collection and analysis and to implement and guide change. Many categories of lean tools are available to facilitate these tasks: value stream mapping for visualizing the current state of a process and identifying activities that add no value; root cause analysis for determining the fundamental cause of a problem; team charters for planning, guiding, and communicating about change in a specific process; management dashboards for monitoring real-time developments; and a balanced scorecard for strategic oversight and planning in the areas of finance, customer service, internal operations, and staff development. © RSNA, 2012.
Responses on a lateralized lexical decision task relate to both reading times and comprehension.
Michael, Mary
2009-12-01
Research over the last few years has shown that the dominance of the left hemisphere in language processing is less complete than previously thought [Beeman, M. (1993). Semantic processing in the right hemisphere may contribute to drawing inferences from discourse. Brain and Language, 44, 80-120; Faust, M., & Chiarello, C. (1998). Sentence context and lexical ambiguity resolution by the two hemispheres. Neuropsychologia, 36(9), 827-835; Weems, S. A., & Zaidel, E. (2004). The relationship between reading ability and lateralized lexical decision. Brain and Cognition, 55(3), 507-515]. Engaging the right brain in language processing is required for processing speaker/writer intention, particularly in those subtle interpretive processes that help in deciphering humor, irony, and emotional inference. In two experiments employing a divided field or lateralized lexical decision task (LLDT), accuracy and reaction times (RTs) were related to reading times and comprehension on sentence reading. Differences seen in RTs and error rates by visual fields were found to relate to performance. Smaller differences in performance between fields tended to be related to better performance on the LLDT in both experiments and, in Experiment 1, to reading measures. Readers who can exploit both hemispheres for language processing equally appear to be at an advantage in lexical access and possibly also in reading performance.
75 FR 62591 - Committee on Equal Opportunities in Science and Engineering (CEOSE); Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-12
... NATIONAL SCIENCE FOUNDATION Committee on Equal Opportunities in Science and Engineering (CEOSE... Equal Opportunities in Science and Engineering (1173). Dates/Time: October 25, 2010, 8:30 a.m.-5:30 p.m... the National Science Foundation (NSF) concerning broadening participation in science and engineering...
Taghavi, Mahmoud; Zazouli, Mohammad Ali; Yousefi, Zabihollah; Akbari-adergani, Behrouz
2015-11-01
In this study, multi-walled carbon nanotubes were functionalized with L-cysteine to examine the kinetic and isotherm modeling of Cd(II) adsorption onto L-cysteine-functionalized multi-walled carbon nanotubes (L-MWCNTs). The adsorption behavior of Cd(II) was studied by varying parameters including the L-MWCNT dose, contact time, and cadmium concentration. Equilibrium adsorption isotherms and kinetics were also investigated based on Cd(II) adsorption tests. The results showed that increasing the contact time and adsorbent dosage increased the adsorption rate. The optimum conditions for Cd(II) removal were pH = 7.0, 15 mg/L L-MWCNT dosage, 6 mg/L cadmium concentration, and a contact time of 60 min, at which the removal efficiency was 89.56%. The Langmuir and Freundlich models were employed to analyze the experimental data, which fitted the Langmuir model well (R2 = 0.994) with a q_max of 43.47 mg/g. Analysis of the kinetic data with the pseudo-first-order and pseudo-second-order equations revealed that cadmium adsorption on L-MWCNTs followed the pseudo-second-order kinetic model, with correlation coefficients (R2) of 0.998, 0.992, and 0.998 for 3, 6, and 9 mg/L Cd(II) concentrations, respectively. Overall, adsorption on L-MWCNTs can be considered an effective technology for treating Cd(II)-polluted solutions.
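The pseudo-second-order model used above has a convenient linearized form, t/q_t = 1/(k2·q_e^2) + t/q_e, so the equilibrium capacity and rate constant can be read off a straight-line fit of t/q_t against t. A minimal sketch with synthetic data (q_e reuses the reported 43.47 mg/g; k2 is an assumed value, not from the study):

```python
import numpy as np

# Synthetic kinetic data generated from a known pseudo-second-order model.
# q_e_true matches the reported capacity; k2_true is an assumed rate constant.
q_e_true, k2_true = 43.47, 0.005               # mg/g, g/(mg*min)
t = np.array([5, 10, 20, 30, 45, 60], float)   # contact time, min
q_t = (k2_true * q_e_true**2 * t) / (1 + k2_true * q_e_true * t)

# Linearized form: t/q_t = 1/(k2*q_e^2) + t/q_e
# -> slope = 1/q_e, intercept = 1/(k2*q_e^2)
slope, intercept = np.polyfit(t, t / q_t, 1)
q_e_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * q_e_fit**2)

print(f"q_e = {q_e_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
```

A t/q_t-versus-t plot that is linear with R2 close to 1, as reported here, is the usual diagnostic that adsorption follows pseudo-second-order kinetics.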
Fast mass spectrometry-based enantiomeric excess determination of proteinogenic amino acids.
Fleischer, Heidi; Thurow, Kerstin
2013-03-01
A rapid determination of the enantiomeric excess of proteinogenic amino acids is of great importance in various fields of chemical and biologic research and industry. Owing to their different biologic effects, enantiomers are interesting research subjects in drug development for the design of new and more efficient pharmaceuticals. Usually, the enantiomeric composition of amino acids is determined by conventional analytical methods such as liquid or gas chromatography or capillary electrophoresis. These techniques do not fulfill the requirements of high-throughput screening because of their relatively long analysis times. The method presented here allows fast analysis of chiral amino acids without prior time-consuming chromatographic separation. The measurements are based on parallel kinetic resolution with pseudoenantiomeric mass-tagged auxiliaries and were carried out by mass spectrometry with electrospray ionization. All 19 chiral proteinogenic amino acids were tested, and Pro, Ser, Trp, His, and Glu were selected as model substrates for verification measurements. The enantiomeric excesses of amino acids with non-polar and aliphatic side chains, as well as Trp and Phe (aromatic side chains), were determined with maximum deviations from the expected value of 10 %ee or less. Ser, Cys, His, Glu, and Asp were determined with deviations of 14 %ee or less, and the enantiomeric excess of Tyr was calculated with a deviation of 17 %ee. The total screening process is fully automated, from sample pretreatment to data processing. The method enables fast measurement times of about 1.38 min per sample and is applicable to high-throughput screening.
40 CFR 86.1309-90 - Exhaust gas sampling system; Otto-cycle and non-petroleum-fueled engines.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., shall exceed either 2.5 mg/l or a concentration equal to 25 times the limit of detection for the HPLC..., shall exceed either 2.5 mg/l or a concentration equal to 25 times the limit of detection for the HPLC...
40 CFR 86.1309-90 - Exhaust gas sampling system; Otto-cycle and non-petroleum-fueled engines.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., shall exceed either 2.5 mg/l or a concentration equal to 25 times the limit of detection for the HPLC..., shall exceed either 2.5 mg/l or a concentration equal to 25 times the limit of detection for the HPLC...
49 CFR 178.345-3 - Structural integrity.
Code of Federal Regulations, 2014 CFR
2014-10-01
... accelerative force equal to 0.35 times the vertical reaction at the suspension assembly of a trailer; or the... the suspension assembly of a trailer, and the horizontal pivot of the upper coupler (fifth wheel) or... normal operating accelerative force equal to 0.35 times the vertical reaction at the suspension assembly...
49 CFR 178.345-3 - Structural integrity.
Code of Federal Regulations, 2012 CFR
2012-10-01
... accelerative force equal to 0.35 times the vertical reaction at the suspension assembly of a trailer; or the... the suspension assembly of a trailer, and the horizontal pivot of the upper coupler (fifth wheel) or... normal operating accelerative force equal to 0.35 times the vertical reaction at the suspension assembly...
49 CFR 178.345-3 - Structural integrity.
Code of Federal Regulations, 2013 CFR
2013-10-01
... accelerative force equal to 0.35 times the vertical reaction at the suspension assembly of a trailer; or the... the suspension assembly of a trailer, and the horizontal pivot of the upper coupler (fifth wheel) or... normal operating accelerative force equal to 0.35 times the vertical reaction at the suspension assembly...
Cappell, M S; Spray, D C; Bennett, M V
1988-06-28
Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.
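A non-stationary (inhomogeneous) Poisson process of the kind described, with a mean frequency changing over time, can be simulated by thinning a homogeneous process (the Lewis-Shedler method). A minimal sketch; the ramping MEPP rate below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def thinning(rate, rate_max, t_end):
    """Event times of an inhomogeneous Poisson process on [0, t_end],
    generated by thinning a homogeneous process of rate rate_max."""
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)    # candidate inter-event gap
        if t > t_end:
            return np.array(events)
        if rng.random() < rate(t) / rate_max:   # keep with prob rate(t)/rate_max
            events.append(t)

# Hypothetical MEPP frequency ramping from 2/s to 10/s over 100 s
rate = lambda t: 2.0 + 0.08 * t
times = thinning(rate, rate_max=10.0, t_end=100.0)
# Expected count = integral of rate(t) dt = 200 + 400 = 600
print(len(times))
```

Counting events in successive windows of such a simulation reproduces the signature of a non-stationary Poisson process: Poisson-distributed counts whose mean drifts with time.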
A predator equalizes rate of capture of a schooling prey in a patchy environment.
Vijayan, Sundararaj; Kotler, Burt P; Abramsky, Zvika
2017-05-01
Prey individuals are often distributed heterogeneously in the environment, and their abundances and relative availabilities vary among patches. A foraging predator should maximize energetic gains by selectively choosing patches with higher prey density. However, catching behaviorally responsive, group-forming prey in patchy environments can be a challenge for predators. First, they have to identify the profitable patches, and second, they must manage the prey's sophisticated anti-predator behavior. Thus, the forager and its prey have to continuously adjust their behavior to that of their opponent. Given these conditions, the foraging predator's behavior should be dynamic over time in terms of foraging effort and prey capture rates across different patches. Theoretically, the predator should allocate its time among patches of behaviorally responsive prey so as to equalize its prey capture rates across patches through time. We tested this prediction in a model system containing a predator (little egret) and group-forming prey (common goldfish) in two sets of experiments in which (1) patches (pools) contained equal numbers of prey, or (2) patches contained unequal densities of prey. The egret equalized the prey capture rate through time in both the equal- and different-density experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Brenner, Howard
2011-10-01
Linear irreversible thermodynamic principles are used to demonstrate, by counterexample, the existence of a fundamental incompleteness in the basic pre-constitutive mass, momentum, and energy equations governing fluid mechanics and transport phenomena in continua. The demonstration is effected by addressing the elementary case of steady-state heat conduction (and transport processes in general) occurring in quiescent fluids. The counterexample questions the universal assumption of equality of the four physically different velocities entering into the basic pre-constitutive mass, momentum, and energy conservation equations. Explicitly, it is argued that such equality is an implicit constitutive assumption rather than an established empirical fact of unquestioned authority. Such equality, if indeed true, would require formal proof of its validity, currently absent from the literature. In fact, our counterexample shows the assumption of equality to be false. As the current set of pre-constitutive conservation equations appearing in textbooks are regarded as applicable both to continua and noncontinua (e.g., rarefied gases), our elementary counterexample negating belief in the equality of all four velocities impacts on all aspects of fluid mechanics and transport processes, continua and noncontinua alike.
Adaptive frequency-domain equalization in digital coherent optical receivers.
Faruk, Md Saifuddin; Kikuchi, Kazuro
2011-06-20
We propose a novel frequency-domain adaptive equalizer for digital coherent optical receivers, which can reduce the computational complexity of the conventional time-domain adaptive equalizer based on finite-impulse-response (FIR) filters. The proposed equalizer can operate on the input sequence sampled by free-running analog-to-digital converters (ADCs) at the rate of two samples per symbol; therefore, the arbitrary initial sampling phase of the ADCs can be adjusted so that the best symbol-spaced sequence is produced. The equalizer can also be configured in the butterfly structure, which enables demultiplexing of polarization tributaries in addition to equalization of linear transmission impairments. The performance of the proposed equalization scheme is verified by 40-Gbit/s dual-polarization quadrature phase-shift keying (QPSK) transmission experiments.
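The complexity saving of frequency-domain equalization comes from adapting one complex tap per frequency bin instead of a long time-domain FIR filter. As a rough sketch (not the authors' exact scheme), a per-bin normalized-LMS update driving each tap toward the inverse of an assumed channel looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                    # FFT block size
h = np.zeros(N); h[0], h[1], h[2] = 1.0, 0.4, 0.2   # assumed channel (circular)
H = np.fft.fft(h)

W = np.ones(N, dtype=complex)             # one complex tap per frequency bin
mu = 0.1                                  # adaptation step size
for _ in range(200):
    d = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)   # QPSK training block
    X = H * np.fft.fft(d)                 # received block (circular channel)
    E = np.fft.fft(d) - W * X             # per-bin error vs. desired spectrum
    W += mu * E * np.conj(X) / (np.abs(X)**2 + 1e-9)  # normalized LMS update

# After convergence W should approximate the inverse channel 1/H
err = np.max(np.abs(W - 1.0 / H))
print(err)
```

Each bin adapts independently, so the per-sample work is one FFT pair plus a scalar update per bin, rather than a full-length convolution, which is the complexity advantage claimed for frequency-domain equalizers.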
Wu, Yin; Hu, Jie; van Dijk, Eric; Leliveld, Marijke C.; Zhou, Xiaolin
2012-01-01
Previous behavioral studies have shown that initial ownership influences individuals' fairness consideration and other-regarding behavior. However, it is not entirely clear whether initial ownership influences brain activity when a recipient evaluates the fairness of asset distribution. In this study, we randomly assigned the bargaining property (a monetary reward) to either the allocator or the recipient in the ultimatum game and let participants of the study, acting as recipients, receive disadvantageous unequal, equal, or advantageous unequal offers from allocators while event-related potentials (ERPs) were recorded. Behavioral results showed that participants were more likely to reject disadvantageous unequal and equal offers when they initially owned the property than when they did not. The two types of unequal offers evoked more negative-going ERPs (the MFN) than the equal offers in an early time window, and the differences were not modulated by initial ownership. In a late time window, however, the P300 responses to division schemes were affected not only by the type of unequal offer but also by whom the property was initially assigned to. These findings suggest that while the MFN may function as a general mechanism that evaluates whether an offer is consistent with the equity rule, the P300 is sensitive to top-down controlled processes, into which factors related to the allocation of attentional resources, including initial ownership and personal interests, come into play. PMID:22761850
Anticorrelation between changes of Hα spectral line FWHM and Doppler velocities
NASA Astrophysics Data System (ADS)
Khutsishvili, David; Zaqarashvili, Teimuraz; Khutsishvili, Eldar; Kvernadze, Teimuraz; Kulijanishvili, Vazha; Kakhiani, Vova; Sikharulidze, Maya
From September 25 through October 17, 18, and 19, 2012, a new series of Hα spicule spectrograms at heights of 7,500 km in the solar chromosphere was obtained using the 53-cm large non-eclipsing coronagraph of Abastumani Astrophysical Observatory (Georgia). Spectrograms in the Hα line were obtained in the second series of the spectrograph, where the reverse dispersion equals 0.96 Å/mm. Doppler velocities and half-widths of 10 spicules were measured with a cadence of 4.5 s and standard errors of ±0.3 km/s and ±0.03 Å, respectively. The lifetimes of almost all measured spicules were 12-16 min; they therefore resemble type I spicules. To find periodic changes of the Hα FWHM, we used the Lomb periodogram algorithm for unevenly distributed time series, and we processed the Doppler velocities of the same spicules in the same images with the same algorithm. The confidence levels for our data were 9.0 for 95% and 10.7 for 99% in power units. The periods are mostly above 2 min (>180 s), with most falling between 5 and 9 min (300-540 s). To examine possible relations between the changes of Hα FWHM and Doppler velocities, we performed low-pass FFT filtering with different cut-off periods: 60 s (0.016 Hz), 100 s (0.01 Hz), and 200 s (0.005 Hz). All 10 spicules clearly show anticorrelation, especially for the longest-period changes.
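The Lomb periodogram used above handles unevenly spaced samples directly, without interpolation. A compact NumPy implementation of the classic normalized form, applied to a synthetic unevenly sampled series with an assumed 360 s (6 min) period inside the reported 5-9 min range:

```python
import numpy as np

def lomb(t, x, freqs):
    """Classic Lomb normalized periodogram for unevenly sampled data."""
    x = x - x.mean()
    var = x.var()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        # Time offset tau that decouples the sine and cosine sums
        tau = np.arctan2(np.sum(np.sin(2*w*t)), np.sum(np.cos(2*w*t))) / (2*w)
        c, s = np.cos(w*(t - tau)), np.sin(w*(t - tau))
        power[i] = 0.5/var * ((x @ c)**2 / (c @ c) + (x @ s)**2 / (s @ s))
    return power

rng = np.random.default_rng(2)
# Unevenly sampled signal with a 360 s period plus noise (synthetic stand-in)
t = np.sort(rng.uniform(0, 1800, 300))
x = np.sin(2*np.pi*t/360.0) + 0.3*rng.standard_normal(300)

freqs = np.linspace(1/900, 1/60, 400)   # periods from 900 s down to 60 s
p = lomb(t, x, freqs)
best_period = 1.0 / freqs[np.argmax(p)]
print(best_period)
```

The peak of the periodogram recovers the injected period; in practice the power at the peak is compared against confidence levels in power units, as done in the study.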
Perceptron Genetic to Recognize Opening Strategy Ruy Lopez
NASA Astrophysics Data System (ADS)
Azmi, Zulfian; Mawengkang, Herman
2018-01-01
The standard perceptron method is not effective for hardware-based systems because its learning is not real-time. With a genetic-algorithm approach to computing and searching for the best weights (fitness values), the system learns in only one iteration. The analysis combines a perceptron model with a genetic-algorithm approach, from the artificial neural network family, and the results were tested on recognizing the opening pattern of the Ruy Lopez chess opening. The data are drawn from a chess opening database, using the positions of the white pawns over the first eight moves of the opening. The perceptron takes many inputs to one output, processing the weights and bias until the output equals the target. The data were trained and tested with MATLAB software, and the system can recognize in real time whether an opening is or is not the Ruy Lopez.
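The idea of replacing iterative perceptron training with a genetic search over weight vectors can be sketched as follows; the eight-element move encoding and the separating rule are synthetic stand-ins, not the paper's chess data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy training set: 8-element binary feature vectors (standing in for encoded
# opening moves), label 1 for the target opening, 0 otherwise.
X = rng.integers(0, 2, size=(40, 8)).astype(float)
y = (X[:, 0] + X[:, 3] > 1).astype(float)      # assumed separable rule

def predict(w, X):
    # Perceptron: weighted sum plus bias, hard threshold
    return (X @ w[:-1] + w[-1] > 0).astype(float)

def fitness(w):
    return np.mean(predict(w, X) == y)          # classification accuracy

# Genetic search over weight vectors instead of iterative perceptron updates
pop = rng.normal(size=(50, 9))                  # 8 weights + 1 bias per member
for _ in range(100):
    scores = np.array([fitness(w) for w in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                   # selection: keep the 10 fittest
    children = parents[rng.integers(0, 10, 40)] + 0.1*rng.normal(size=(40, 9))
    pop = np.vstack([parents, children])        # elitism + mutated offspring

best = max(pop, key=fitness)
print(fitness(best))
```

The genetic search evaluates whole weight vectors at once, so a trained classifier can be deployed without the online weight-update loop that makes standard perceptron learning awkward on fixed hardware.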
The stem cell laboratory: design, equipment, and oversight.
Wesselschmidt, Robin L; Schwartz, Philip H
2011-01-01
This chapter describes some of the major issues to be considered when setting up a laboratory for the culture of human pluripotent stem cells (hPSCs). The process of establishing an hPSC laboratory can be divided into two equally important parts. One is entirely administrative and includes developing protocols, seeking approval, and establishing reporting processes and documentation. The other part involves the physical plant and includes design, equipment, and personnel. Proper planning of laboratory operations and proper design of the physical layout of the stem cell laboratory so that it meets the scope of planned operations is a major undertaking, but the time spent upfront will pay long-term returns in operational efficiency and effectiveness. A well-planned, organized, and properly equipped laboratory supports research activities by increasing efficiency and reducing lost time and wasted resources.
NASA Technical Reports Server (NTRS)
Kyle, R. G.
1972-01-01
Information transfer between the operator and computer-generated display systems is an area where the human factors engineer finds little useful design data relating human performance to system effectiveness. This study used a computer-driven, cathode-ray-tube graphic display to quantify human response speed in a sequential information-processing task. The performance criterion was response time to the sixteen cell elements of a square matrix display. A stimulus signal instruction specified selected cell locations by both row and column identification. An equally probable number code, from one to four, was assigned at random to the sixteen cells of the matrix and correspondingly required one of four matched keyed-response alternatives. The display format corresponded to a sequence of diagnostic system maintenance events that enabled the operator to verify prime system status, engage backup redundancy for failed subsystem components, and exercise alternate decision-making judgments. The experimental task bypassed the skilled decision-making element and computer processing time in order to determine a lower bound on the basic response speed for a given stimulus/response hardware arrangement.
Numerical Analysis of Heat Transfer During Quenching Process
NASA Astrophysics Data System (ADS)
Madireddi, Sowjanya; Krishnan, Krishnan Nambudiripad; Reddy, Ammana Satyanarayana
2018-04-01
A numerical model is developed to simulate the immersion quenching process of metals. The time of quench plays an important role if the process involves a defined step-quenching schedule to obtain the desired characteristics. Lumped heat capacity analysis used for this purpose requires the value of the heat transfer coefficient, whose evaluation requires extensive experimental data. Experiments on a sample workpiece may not represent the actual component, which may vary in dimension. A fluid-structure interaction technique with a coupled interface between the solid (metal) and liquid (quenchant) is used for the simulations. The initial period of quenching shows a boiling heat transfer phenomenon with high heat transfer coefficients (5,000 to 2.5 × 10^5 W/m^2K). The shape of a workpiece of equal dimensions shows little influence on the cooling rate. Non-uniformity in hardness at the sharp corners can be reduced by rounding off the edges: for a square piece of 20 mm thickness with a 3 mm fillet radius, this difference is reduced by 73%. The model can be used for any metal-quenchant combination to obtain time-temperature data without the need for experimentation.
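The lumped heat capacity analysis mentioned above gives T(t) = T_inf + (T0 - T_inf)·exp(-hAt/(ρVc)), but it is only trustworthy when the Biot number h·L_c/k is well below 0.1. A quick check with assumed steel properties and a boiling-regime coefficient shows why the lumped model becomes questionable during quenching, motivating a coupled fluid-structure simulation:

```python
import numpy as np

# Lumped-capacitance cooling: T(t) = T_inf + (T0 - T_inf)*exp(-h*A*t/(rho*V*c))
# Valid only when the Biot number h*L_c/k << 0.1. Values below are assumed.
h = 5000.0                       # W/m^2K, boiling-regime coefficient (assumed)
rho, c, k = 7800.0, 490.0, 45.0  # steel density, specific heat, conductivity
L = 0.02                         # 20 mm cube side
V, A = L**3, 6 * L**2
L_c = V / A                      # characteristic length
Bi = h * L_c / k                 # ~0.37 here, so lumped analysis is marginal
T0, T_inf = 850.0, 60.0          # initial and quenchant temperatures, deg C

t = np.linspace(0, 30, 301)      # s
T = T_inf + (T0 - T_inf) * np.exp(-h * A * t / (rho * c * V))

# Time to cool to 200 deg C under the lumped model
t_200 = -rho * c * V / (h * A) * np.log((200 - T_inf) / (T0 - T_inf))
print(Bi, t_200)
```

At boiling-regime coefficients the Biot number exceeds 0.1 by a wide margin, so internal temperature gradients matter and the lumped curve is only a rough estimate.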
Temporo-parietal junction activity in theory-of-mind tasks: falseness, beliefs, or attention.
Aichhorn, Markus; Perner, Josef; Weiss, Benjamin; Kronbichler, Martin; Staffen, Wolfgang; Ladurner, Gunther
2009-06-01
By combining the false belief (FB) and photo (PH) vignettes to identify theory-of-mind areas with the false sign (FS) vignettes, we re-establish the functional asymmetry between the left and right temporo-parietal junction (TPJ). The right TPJ (TPJ-R) is specially sensitive to processing belief information, whereas the left TPJ (TPJ-L) is equally responsible for FBs as well as FSs. Measuring BOLD at two time points in each vignette, at the time the FB-inducing information (or lack of information) is presented and at the time the test question is processed, made clear that the FB is processed spontaneously as soon as the relevant information is presented and not on demand for answering the question in contrast to extant behavioral data. Finally, a fourth, true belief vignette (TB) required teleological reasoning, that is, prediction of a rational action without any doubts being raised about the adequacy of the actor's information about reality. Activation by this vignette supported claims that the TPJ-R is activated by TBs as well as FBs.
2007-12-01
processing route at this level. A recent study by Garcia-Infanta, et al., of a hypoeutectic Al-7%Si alloy with spheroidal primary aluminum grains is a...compared with the model proposed by Garcia-Infanta, et al. [10]. Further, annealing studies will be performed to determine the recrystallization ...study conducted at 450°C as a function of time to assess recrystallization and grain growth. Two data points per sample were taken from different
a Comparative Case Study of Reflection Seismic Imaging Method
NASA Astrophysics Data System (ADS)
Alamooti, M.; Aydin, A.
2017-12-01
Seismic imaging is the most common means of gathering information about subsurface structural features. The accuracy of seismic images may be highly variable depending on the complexity of the subsurface and on how the seismic data are processed. One of the crucial steps in this process, especially in layered sequences with complicated structure, is the time and/or depth migration of the seismic data. The primary purpose of migration is to increase the spatial resolution of seismic images by repositioning the recorded seismic signal back to its original point of reflection in time/space, which enhances information about complex structure. In this study, our objective is to process a seismic data set (courtesy of the University of South Carolina) to generate an image on which the Magruder fault near Allendale, SC, can be clearly distinguished and its attitude accurately depicted. The data were gathered by the common mid-point method with 60 geophones equally spaced along an approximately 550 m long traverse over nearly flat ground. The results obtained from the application of different migration algorithms (including finite-difference and Kirchhoff) are compared in the time and depth domains to investigate the efficiency of each algorithm in reducing processing time and improving the accuracy of seismic images in reflecting the correct position of the Magruder fault.
NASA Astrophysics Data System (ADS)
Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia
2018-04-01
In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.
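For context, the single-agent two-machine open shop makespan, which the LAPT rule achieves, has a well-known closed form due to Gonzalez and Sahni: the larger of the two machine loads and the largest single job. A sketch with hypothetical processing times (the paper's weighted two-agent objective is harder and admits no such closed form):

```python
def o2_makespan(a, b):
    """Optimal makespan for a two-machine open shop (O2||Cmax):
    the maximum of the two machine loads and the largest single job
    (Gonzalez & Sahni closed form, achieved by the LAPT rule)."""
    return max(sum(a), sum(b), max(x + y for x, y in zip(a, b)))

# Hypothetical jobs: processing times on machines 1 and 2
a = [4, 2, 6, 3]
b = [3, 5, 2, 4]
print(o2_makespan(a, b))   # -> 15 (the machine-1 load dominates)
```

In the two-agent setting each agent's job set has its own such bound, but trading off the two makespans under a weight α is what drives the NP-hardness and the pseudo-polynomial algorithms in the article.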
Krizsan, Andrea; Popa, Raluca Maria
2014-07-01
The article looks at the translation of international norms on domestic violence to the national level in five Central and Eastern European countries. It argues that translation brings a concept of domestic violence, which stretches gender equality ideas underpinning international norms so as to be easier to endorse by mainstream policy actors, and results in policies framed in degendered individual rights terms. The potential for keeping gender equality in focus is then guaranteed by gendering policy processes through empowerment of gender equality actors at all stages. Absence of ownership of the policy by gender equality actors risks co-optation by frames contesting gender equality. © The Author(s) 2014.
NASA Technical Reports Server (NTRS)
Garbeff, Theodore J., II; Panda, Jayanta; Ross, James C.
2017-01-01
Time-resolved shadowgraph and infrared (IR) imaging were performed to investigate off-body and on-body flow features of a generic "hammer-head" launch vehicle geometry previously tested by Coe and Nute (1962). The measurements discussed here were one part of a large range of wind tunnel test techniques that included steady-state pressure-sensitive paint (PSP), dynamic PSP, unsteady surface pressures, and unsteady force measurements. Image data were captured over a Mach number range of 0.6 ≤ M ≤ 1.2 at a Reynolds number of 3 million per foot. Both shadowgraph and IR imagery were captured in conjunction with unsteady pressures and forces and correlated with IRIG-B timing. High-speed shadowgraph imagery was used to identify wake structure and reattachment behind the payload fairing of the vehicle. Various data processing strategies were employed, and ultimately these results correlated well with the location and magnitude of unsteady surface pressure measurements. Two research-grade IR cameras were positioned to image boundary layer transition at the vehicle nose and flow reattachment behind the payload fairing. The poor emissivity of the model surface treatment (fast PSP) proved challenging for the infrared measurement. Reference image subtraction and contrast-limited adaptive histogram equalization (CLAHE) were used to analyze this dataset. Ultimately, turbulent boundary layer transition was observed and located forward of the trip dot line at the model sphere-cone junction. Flow reattachment location was identified behind the payload fairing in both steady and unsteady thermal data. As demonstrated in this effort, recent advances in high-speed and thermal imaging technology have modernized classical techniques, providing a new viewpoint for the modern researcher.
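Full CLAHE is tile-based with histogram clipping (e.g. OpenCV's createCLAHE); the core ideas of reference-image subtraction followed by CDF-based contrast stretching can be sketched with NumPy alone, on a synthetic low-contrast frame standing in for the thermal imagery:

```python
import numpy as np

rng = np.random.default_rng(4)

def hist_equalize(img, levels=256):
    """Global histogram equalization via the CDF. CLAHE applies the same
    mapping per tile with a clipped histogram; this is the whole-image form."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # stretch CDF to [0, 1]
    return (cdf[img] * (levels - 1)).astype(np.uint8)

# Synthetic frame: static background plus a weak "reattachment" footprint
background = rng.integers(100, 110, size=(64, 64)).astype(np.int16)
signal = np.zeros((64, 64), np.int16)
signal[20:40, 20:40] = 8                    # faint thermal feature
frame = (background + signal).astype(np.int16)

# Reference subtraction isolates the weak signal; equalization stretches it
diff = np.clip(frame - background, 0, 255).astype(np.uint8)
enhanced = hist_equalize(diff)
print(enhanced.min(), enhanced.max())
```

Subtracting a reference frame removes fixed emissivity patterns (a key concern with the poorly emissive fast-PSP surface), and the equalization step then expands the few remaining gray levels across the full display range.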
An Application of the Equal Pay Act to Higher Education.
ERIC Educational Resources Information Center
Green, Debra H.
1981-01-01
The applicability of legal principles governing equal pay and sex discrimination in university settings is discussed. The most objective mechanism that a university can utilize to achieve compliance with the Equal Pay Act would be implementation of a salary system that relies on experience, formal education, and time in grade. (MLW)
Parallel-Processing Equalizers for Multi-Gbps Communications
NASA Technical Reports Server (NTRS)
Gray, Andrew; Ghuman, Parminder; Hoy, Scott; Satorius, Edgar H.
2004-01-01
Architectures have been proposed for the design of frequency-domain least-mean-square complex equalizers that would be integral parts of parallel-processing digital receivers of multi-gigahertz radio signals and other quadrature-phase-shift-keying (QPSK) or 16-quadrature-amplitude-modulation (16-QAM) data signals at rates of multiple gigabits per second. "Equalizers" as used here denote receiver subsystems that compensate for distortions in the phase and frequency responses of the broad-band radio-frequency channels typically used to convey such signals. The proposed architectures are suitable for realization in very-large-scale integrated (VLSI) circuitry and, in particular, complementary metal oxide semiconductor (CMOS) application-specific integrated circuits (ASICs) operating at frequencies lower than modulation symbol rates. A digital receiver of the type to which the proposed architecture applies (see Figure 1) would include an analog-to-digital converter (A/D) operating at a rate, fs, of 4 samples per symbol period. To obtain the high speed necessary for sampling, the A/D and a 1:16 demultiplexer immediately following it would be constructed as GaAs integrated circuits. The parallel-processing circuitry downstream of the demultiplexer, including a demodulator followed by an equalizer, would operate at a rate of only fs/16 (in other words, at 1/4 of the symbol rate). The output from the equalizer would be four parallel streams of in-phase (I) and quadrature (Q) samples.
Takesue, Hirofumi; Miyauchi, Carlos Makoto; Sakaiya, Shiro; Fan, Hongwei; Matsuda, Tetsuya; Kato, Junko
2017-07-19
In the pursuance of equality, behavioural scientists disagree about distinct motivators, that is, consideration of others and prospective calculation for oneself. However, accumulating data suggest that these motivators may share a common process in the brain whereby perspectives and events that did not arise in the immediate environment are conceived. To examine this, we devised a game imitating a real decision-making situation regarding redistribution among income classes in a welfare state. The neural correlates of redistributive decisions were examined under contrasting conditions, with and without uncertainty, which affects support for equality in society. The dorsal anterior cingulate cortex (dACC) and the caudate nucleus were activated by equality decisions with uncertainty but by selfless decisions without uncertainty. Activation was also correlated with subjective values. Activation in both the dACC and the caudate nucleus was associated with the attitude to prefer accordance with others, whereas activation in the caudate nucleus reflected that the expected reward involved the prospective calculation of relative income. The neural correlates suggest that consideration of others and prospective calculation for oneself may underlie the support for equality. Projecting oneself into the perspective of others and into prospective future situations may underpin the pursuance of equality.
26 CFR 26.2642-6 - Qualified severance.
Code of Federal Regulations, 2012 CFR
2012-04-01
... from time to time, in equal or unequal shares, for the benefit of any one or more members of the group.... For this purpose, the fraction or percentage may be determined by means of a formula (for example, that fraction of the trust the numerator of which is equal to the transferor's unused GST tax exemption...
26 CFR 26.2642-6 - Qualified severance.
Code of Federal Regulations, 2010 CFR
2010-04-01
... from time to time, in equal or unequal shares, for the benefit of any one or more members of the group.... For this purpose, the fraction or percentage may be determined by means of a formula (for example, that fraction of the trust the numerator of which is equal to the transferor's unused GST tax exemption...
5 CFR 550.114 - Compensatory time off.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) may grant compensatory time off from an employee's basic work requirement under a flexible work schedule under 5 U.S.C. 6122 instead of payment under § 550.113 for an equal amount of overtime work... duty instead of payment under § 550.113 for an equal amount of irregular or occasional overtime work...
2013-01-01
than demographic diversity (Ivancevich & Gilbert, 2000); the goal of equality is to create and manage a heterogeneous mix of abilities, skills, ideas...accepted. Recruiting of minorities and women is not seen as a violation of EO laws (Kravitz, 2008; Newman & Lyon, 2009; Pyburn, et al., 2008). Similarly...209-213. REGULATORY FIT AND EQUAL OPPORTUNITY/DIVERSITY 23 Ivancevich, J. M. & Gilbert, J. A. (2000). Diversity management: Time for a new approach
NASA Technical Reports Server (NTRS)
Rowlands, D. D.; Luthcke, S. B.; McCarthy J. J.; Klosko, S. M.; Chinn, D. S.; Lemoine, F. G.; Boy, J.-P.; Sabaka, T. J.
2010-01-01
The differences between mass concentration (mascon) parameters and standard Stokes coefficient parameters in the recovery of gravity information from Gravity Recovery and Climate Experiment (GRACE) intersatellite K-band range rate data are investigated. First, mascons are decomposed into their Stokes coefficient representations to gauge the range of solutions available using each of the two types of parameters. Next, a direct comparison is made between two time series of unconstrained gravity solutions, one based on a set of global equal-area mascon parameters (equivalent to 4° × 4° at the equator), and the other based on standard Stokes coefficients, with each time series using the same fundamental processing of the GRACE tracking data. It is shown that in unconstrained solutions, the type of gravity parameter being estimated does not qualitatively affect the estimated gravity field. It is also shown that many of the differences in mass flux derivations from GRACE gravity solutions arise from the type of smoothing being used, and that the type of smoothing that can be embedded in mascon solutions has distinct advantages over postsolution smoothing. Finally, a 1 year time series based on global 2° equal-area mascons estimated every 10 days is presented.
Outpatient Waiting Time in Health Services and Teaching Hospitals: A Case Study in Iran
Mohebbifar, Rafat; Hasanpoor, Edris; Mohseni, Mohammad; Sokhanvar, Mobin; Khosravizadeh, Omid; Isfahani, Haleh Mousavi
2014-01-01
Background: One of the most important indexes of health care quality is patient satisfaction, which is achieved only through management-based processes. One such process in health care organizations is appropriate management of the waiting time process. The aim of this study is to systematically analyze outpatient waiting time. Methods: This descriptive, cross-sectional, applied study was conducted in 2011 in the teaching and health care hospitals of a medical university in northwest Iran. Since the distribution of outpatients was equal across all months, staged sampling was used. A total of 160 outpatients were studied, and the data were analyzed using SPSS software. Results: The results showed that the waiting time for outpatients of the ophthalmology clinic, averaging 245 minutes per patient, was the longest among the clinics. The orthopedic clinic had the shortest waiting time, averaging 77 minutes per patient. The overall average waiting time per patient in the teaching hospitals under study was about 161 minutes. Conclusion: By applying certain models, waiting time can be reduced, especially the time and space before admission to the examination room. Models such as pre-admission scheduling, electronic visit systems via the internet, process models, the six sigma model, queuing theory, and the FIFO model are components of interventions that reduce outpatient waiting time. PMID:24373277
40 CFR Table 3 to Subpart Dddd of... - Work Practice Requirements
Code of Federal Regulations, 2014 CFR
2014-07-01
... Process furnish with a 24-hour block average inlet moisture content of less than or equal to 30 percent... 24-hour block average inlet moisture content of the veneer is less than or equal to 25 percent (by...
Method of detecting system function by measuring frequency response
NASA Technical Reports Server (NTRS)
Morrison, John L. (Inventor); Morrison, William H. (Inventor); Christophersen, Jon P. (Inventor)
2012-01-01
Real-time battery impedance spectrum is acquired using a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum using a one-time record that enables battery diagnostics. An excitation current to a battery is a sum of equal amplitude sine waves of frequencies that are octave harmonics spread over a range of interest. A sample frequency is also octave and harmonically related to all frequencies in the sum. The time profile of this signal has a duration that is a few periods of the lowest frequency. The voltage response of the battery, average deleted, is the impedance of the battery in the time domain. Since the excitation frequencies are known and octave and harmonically related, a simple algorithm, FST, processes the time record by rectifying relative to the sine and cosine of each frequency. Another algorithm yields real and imaginary components for each frequency.
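The synchronous-detection step this abstract describes (rectifying the averaged-deleted response against the sine and cosine of each known octave-harmonic excitation frequency) can be sketched as follows. This is an illustrative reconstruction of the general technique, not the patented FST implementation; all function names and parameter values are assumptions.

```python
import numpy as np

def octave_excitation(f0, n_freqs, fs, n_periods=4):
    """Sum of equal-amplitude sine waves at octave harmonics of f0,
    sampled at fs over a few periods of the lowest frequency."""
    freqs = f0 * 2.0 ** np.arange(n_freqs)
    t = np.arange(0.0, n_periods / f0, 1.0 / fs)
    signal = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return t, freqs, signal

def detect_components(t, response, freqs):
    """Recover the in-phase (real) and quadrature (imaginary) parts of the
    response at each excitation frequency by multiplying with sin/cos and
    averaging (synchronous detection over whole periods)."""
    response = response - response.mean()  # "average deleted"
    out = {}
    for f in freqs:
        s = np.sin(2 * np.pi * f * t)
        c = np.cos(2 * np.pi * f * t)
        real = 2.0 * np.mean(response * s)
        imag = 2.0 * np.mean(response * c)
        out[float(f)] = complex(real, imag)
    return out
```

Because the frequencies are octave-related and the record spans whole periods of the lowest frequency, the cross-terms between frequencies average to zero and each component is recovered independently.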
Method of detecting system function by measuring frequency response
Morrison, John L [Butte, MT; Morrison, William H [Manchester, CT; Christophersen, Jon P [Idaho Falls, ID
2012-04-03
Real-time battery impedance spectrum is acquired using a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum using a one-time record that enables battery diagnostics. An excitation current to a battery is a sum of equal amplitude sine waves of frequencies that are octave harmonics spread over a range of interest. A sample frequency is also octave and harmonically related to all frequencies in the sum. The time profile of this signal has a duration that is a few periods of the lowest frequency. The voltage response of the battery, average deleted, is the impedance of the battery in the time domain. Since the excitation frequencies are known and octave and harmonically related, a simple algorithm, FST, processes the time record by rectifying relative to the sine and cosine of each frequency. Another algorithm yields real and imaginary components for each frequency.
2017-06-01
ARL-TR-8047 ● JUNE 2017 ● US Army Research Laboratory. Fabrication of High-Strength Lightweight Metals for Armor and Structural Applications: Large-Scale Equal Channel Angular Extrusion Processing of...
[Optimizing histological image data for 3-D reconstruction using an image equalizer].
Roth, A; Melzer, K; Annacker, K; Lipinski, H G; Wiemann, M; Bingmann, D
2002-01-01
Bone cells form a wired network within the extracellular bone matrix. To analyse this complex 3D structure, we employed a confocal fluorescence imaging procedure to visualize live bone cells within their native surroundings. By means of newly developed image processing software, the "Image-Equalizer", we aimed to enhance contrast and eliminate artefacts in such a way that cell bodies as well as fine interconnecting processes were visible.
26 CFR 1.684-1 - Recognition of gain on transfers to certain foreign trusts and estates.
Code of Federal Regulations, 2010 CFR
2010-04-01
... recognize gain at the time of the transfer equal to the excess of the fair market value of the property... portion of FT. Under paragraph (a)(1) of this section, A recognizes gain at the time of the transfer equal... of 1000X, and property R, with a fair market value of 2000X, to FT. At the time of the transfer, A's...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu
This paper intends to reveal the ability of the linear interpolation method to predict missing values in solar radiation time series. A reliable dataset requires a complete observed time series. The absence or presence of radiation data alters the long-term variation of solar radiation measurement values, and such gaps increase the chance of biased outputs in modelling and validation. Completeness of the observed dataset is significantly important for data analysis. Discontinuous and unreliable time series solar radiation data are widespread and have become the main problematic issue; however, only a limited number of studies have been carried out to estimate missing values in solar radiation datasets.
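The gap-filling approach this abstract evaluates, linear interpolation of missing samples in a time series, can be sketched minimally as below; the assumption that gaps are marked as NaN is mine, not the paper's.

```python
import numpy as np

def fill_gaps_linear(series):
    """Replace NaN entries with values linearly interpolated
    from the nearest valid neighbours in the series."""
    series = np.asarray(series, dtype=float)
    idx = np.arange(series.size)
    valid = ~np.isnan(series)
    filled = series.copy()
    # np.interp draws straight lines between surrounding valid points
    filled[~valid] = np.interp(idx[~valid], idx[valid], series[valid])
    return filled
```

Note that `np.interp` clamps to the endpoint values, so leading or trailing gaps are filled with the first or last observation rather than extrapolated.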
NASA Astrophysics Data System (ADS)
Zhokh, Alexey A.; Strizhak, Peter E.
2018-04-01
The solutions of the time-fractional diffusion equation for short and long times are obtained via an application of the asymptotic Green's functions. The derived solutions are applied to the analysis of methanol mass transfer through an H-ZSM-5/alumina catalyst grain. It is demonstrated that the methanol transport in the catalyst's pores may be described fairly well by the obtained solutions. The measured fractional exponent is equal to 1.20 ± 0.02 and reveals the super-diffusive regime of the methanol mass transfer. The presence of anomalous transport may be caused by geometrical restrictions and the adsorption process on the internal surface of the catalyst grain's pores.
Image enhancement software for underwater recovery operations: User's manual
NASA Astrophysics Data System (ADS)
Partridge, William J.; Therrien, Charles W.
1989-06-01
This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides the capability to provide contrast enhancement and other similar functions in real time through hardware lookup tables, to automatically perform histogram equalization, to capture one or more frames and average them or apply one of several different processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix serves to explain the principle concepts that are used in the image processing.
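The automatic histogram equalization mentioned in this manual can be sketched for an 8-bit image using the standard CDF-remapping formulation; this is the textbook technique, not the report's own code, and the function name is an assumption.

```python
import numpy as np

def equalize_histogram(img):
    """Map 8-bit gray levels through the normalized cumulative histogram
    so the output levels are spread over the full 0-255 range."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Classic lookup-table form: rescale the CDF to [0, 255]
    lut = np.round((cdf - cdf_min) / float(img.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```

In hardware of the era described, the same remapping would be loaded into the frame grabber's lookup table so it applies to live video in real time.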
NASA Astrophysics Data System (ADS)
Andriyah, L.; Sulistiyono, E.
2017-02-01
One of the steps in the manganese dioxide manufacturing process for the battery industry is the purification of lithium manganese sulphate solution. The elimination of impurities such as iron is important in hydrometallurgical processes. Therefore, this paper presents the purification of manganese sulphate solution by removing impurities using a selective deposition method, namely activated carbon adsorption and NaOH addition. The experimental results showed that the optimum adsorption condition occurs with the addition of 5 g of adsorbent and 10 ml of 1 N NaOH and a processing time of 30 minutes, with the activated carbon from Japan performing best. Since the cathode material for lithium-ion manganese batteries must be absolutely free of titanium, local wood charcoal is good enough, eliminating 70.88% of the Ti ions.
Production of continuous piezoelectric ceramic fibers for smart materials and active control devices
NASA Astrophysics Data System (ADS)
French, Jonathan D.; Weitz, Gregory E.; Luke, John E.; Cass, Richard B.; Jadidian, Bahram; Bhargava, Parag; Safari, Ahmad
1997-05-01
Advanced Cerametrics Inc. has conceived of and developed the Viscous-Suspension-Spinning Process (VSSP) to produce continuous fine filaments of nearly any powdered ceramic material. VSSP lead zirconate titanate (PZT) fiber tows with 100 and 790 filaments have been spun in continuous lengths exceeding 1700 meters. Sintered PZT filaments typically are 10 - 25 microns in diameter and have moderate flexibility. Prior to carrier burnout and sintering, VSSP PZT fibers can be formed into 2D and 3D shapes using conventional textile and composite forming processes. While the extension of PZT is on the order of 20 microns per linear inch, a woven, wound or braided structure can contain very long lengths of PZT fiber and generate comparatively large output strokes from relatively small volumes. These structures are intended for applications such as bipolar actuators for fiber optic assembly and repair, vibration and noise damping for aircraft, rotorcraft, automobiles and home applications, vibration generators and ultrasonic transducers for medical and industrial imaging. Fiber and component cost savings over current technologies, such as the 'dice-and-fill' method for transducer production, and the range of unique structures possible with continuous VSSP PZT fiber are discussed. Recent results have yielded 1-3 type composites (25 vol% PZT) with d33 = 340 pC/N, K = 470, g33 = 80 mV/N, kt = 0.54, kp = 0.19, dh = 50.1 pC/N and gh = 13 mV/N.
Differential transfer processes in incremental visuomotor adaptation.
Seidler, Rachel D
2005-01-01
Visuomotor adaptive processes were examined by testing transfer of adaptation between similar conditions. Participants made manual aiming movements with a joystick to hit targets on a computer screen, with real-time feedback display of their movement. They adapted to three different rotations of the display in a sequential fashion, with a return to baseline display conditions between rotations. Adaptation was better when participants had prior adaptive experiences. When performance was assessed using direction error (calculated at the time of peak velocity) and initial endpoint error (error before any overt corrective actions), transfer was greater when the final rotation reflected an addition of previously experienced rotations (adaptation order 30 degrees rotation, 15 degrees, 45 degrees) than when it was a subtraction of previously experienced conditions (adaptation order 45 degrees rotation, 15 degrees, 30 degrees). Transfer was equal regardless of adaptation order when performance was assessed with final endpoint error (error following any discrete, corrective actions). These results imply the existence of multiple independent processes in visuomotor adaptation.
Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro
2010-03-01
We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.
Psychological and Neural Mechanisms of Subjective Time Dilation
van Wassenhove, Virginie; Wittmann, Marc; Craig, A. D. (Bud); Paulus, Martin P.
2011-01-01
For a given physical duration, certain events can be experienced as subjectively longer in duration than others. Try this for yourself: take a quick glance at the second hand of a clock. Immediately, the tick will pause momentarily and appear to be longer than the subsequent ticks. Yet, they all last exactly 1 s. By and large, a deviant or an unexpected stimulus in a series of similar events (same duration, same features) can elicit a relative overestimation of subjective time (or “time dilation”) but, as is shown here, this is not always the case. We conducted an event-related functional magnetic neuroimaging study on the time dilation effect. Participants were presented with a series of five visual discs, all static and of equal duration (standards) except for the fourth one, a looming or a receding target. The duration of the target was systematically varied and participants judged whether it was shorter or longer than all other standards in the sequence. Subjective time dilation was observed for the looming stimulus but not for the receding one, which was estimated to be of equal duration to the standards. The neural activation for targets (looming and receding) contrasted with the standards revealed an increased activation of the anterior insula and of the anterior cingulate cortex. Contrasting the looming with the receding targets (i.e., capturing the time dilation effect proper) revealed a specific activation of cortical midline structures. The implication of midline structures in the time dilation illusion is here interpreted in the context of self-referential processes. PMID:21559346
NASA Astrophysics Data System (ADS)
Li, Xiang; Luo, Ming; Qiu, Ying; Alphones, Arokiaswami; Zhong, Wen-De; Yu, Changyuan; Yang, Qi
2018-02-01
In this paper, channel equalization techniques for coherent optical fiber transmission systems based on independent component analysis (ICA) are reviewed. The principle of ICA for blind source separation is introduced. The ICA based channel equalization after both single-mode fiber and few-mode fiber transmission for single-carrier and orthogonal frequency division multiplexing (OFDM) modulation formats are investigated, respectively. The performance comparisons with conventional channel equalization techniques are discussed.
29 CFR 1620.9 - Meaning of “establishment.”
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false Meaning of âestablishment.â 1620.9 Section 1620.9 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION THE EQUAL PAY ACT § 1620.9... acquired a well settled meaning by the time of enactment of the Equal Pay Act. It refers to a distinct...
Can Education Equality Trickle-Down to Economic Growth? The Case of Korea
ERIC Educational Resources Information Center
Ilon, Lynn
2011-01-01
Education equality is generally neglected in the literature that investigates education's contribution to economic growth. This paper examines the case of Korea where economic growth, education equality (as measured by years of schooling), and educational quality have all been on the rise for many decades. Using time series data on schooling for…
NASA Astrophysics Data System (ADS)
Johnson, Kristina Mary
In 1973 the computerized tomography (CT) scanner revolutionized medical imaging. This machine can isolate and display, in two-dimensional cross-sections, internal lesions and organs previously impossible to visualize. The possibility of three-dimensional imaging, however, is not yet exploited by present tomographic systems. Using multiple-exposure holography, three-dimensional displays can be synthesized from two-dimensional CT cross-sections. A multiple-exposure hologram is an incoherent superposition of many individual holograms. Intuitively it is expected that holograms recorded with equal energy will reconstruct images with equal brightness. It is found, however, that holograms recorded first are brighter than holograms recorded later in the superposition. This phenomenon is called Holographic Reciprocity Law Failure (HRLF). Computer simulations of latent image formation in multiple-exposure holography are one of the methods used to investigate HRLF. These simulations indicate that it is the time between individual exposures in the multiple-exposure hologram that is responsible for HRLF. This physical parameter introduces an asymmetry into the latent image formation process that favors the signal of previously recorded holograms over holograms recorded later in the superposition. The origin of this asymmetry lies in the dynamics of latent image formation, and in particular in the decay of single-atom latent image specks, which have lifetimes that are short compared to typical times between exposures. An analytical model is developed for a double exposure hologram that predicts a decrease in the brightness of the second exposure as compared to the first exposure as the time between exposures increases. These results are consistent with the computer simulations.
Experiments investigating the influence of this parameter on the diffraction efficiency of reconstructed images in a double exposure hologram are also found to be consistent with the computer simulations and analytical results. From this information, two techniques are presented that correct for HRLF, and succeed in reconstructing multiple holographic images of CT cross-sections with equal brightness. The multiple multiple-exposure hologram is a new hologram that increases the number of equally bright images that can be superimposed on one photographic plate.
ERIC Educational Resources Information Center
Update on the Courts, 1996
1996-01-01
This serial issue concerns itself with several conflicts between individual rights and allegedly wrongful acts that the Supreme Court has not considered previously. The articles on these topics illuminate the constitutional issues of equal protection, due process, and freedom of expression. Specific issues addressed include: (1) equal educational…
Quantum work statistics of charged Dirac particles in time-dependent fields
Deffner, Sebastian; Saxena, Avadh
2015-09-28
The quantum Jarzynski equality is an important theorem of modern quantum thermodynamics. We show that the Jarzynski equality readily generalizes to relativistic quantum mechanics described by the Dirac equation. After establishing the conceptual framework we solve a pedagogical, yet experimentally relevant, system analytically. As a main result we obtain the exact quantum work distributions for charged particles traveling through a time-dependent vector potential evolving under Schrödinger as well as under Dirac dynamics, and for which the Jarzynski equality is verified. Thus, special emphasis is put on the conceptual and technical subtleties arising from relativistic quantum mechanics.
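For reference, the Jarzynski equality discussed in this abstract relates the full nonequilibrium work distribution to the equilibrium free-energy difference:

```latex
\left\langle e^{-\beta W} \right\rangle
  = \int P(W)\, e^{-\beta W}\, \mathrm{d}W
  = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_B T}
```

Here P(W) is the probability distribution of work over repeated realizations of the driving protocol, and ΔF is the free-energy difference between the initial and final equilibrium states.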
Hatz, F; Hardmeier, M; Bousleiman, H; Rüegg, S; Schindler, C; Fuhr, P
2015-02-01
To compare the reliability of a newly developed Matlab® toolbox for the fully automated pre- and post-processing of resting-state EEG (automated analysis, AA) with the reliability of analysis involving visually controlled pre- and post-processing (VA). 34 healthy volunteers (age: median 38.2 (20-49), 82% female) underwent three consecutive 256-channel resting-state EEGs at one-year intervals. Results of frequency analysis of AA and VA were compared with Pearson correlation coefficients, and reliability over time was assessed with intraclass correlation coefficients (ICC). The mean correlation coefficient between AA and VA was 0.94±0.07; the mean ICC was 0.83±0.05 for AA and 0.84±0.07 for VA. AA and VA yield very similar results for spectral EEG analysis and are equally reliable. AA is less time-consuming, completely standardized, and independent of raters and their training. Automated processing of EEG facilitates workflow in quantitative EEG analysis. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Ruiz-Capillas, C; Triki, M; Herrero, A M; Rodriguez-Salas, L; Jiménez-Colmenero, F
2012-10-01
The effect of replacing animal fat (0%, 50% and 80% of pork backfat) by an equal proportion of konjac gel, on processing and quality characteristics of reduced and low-fat dry fermented sausage was studied. Weight loss, pH, and water activity of the sausage were affected (P<0.05) by fat reduction and processing time. Low lipid oxidation levels were observed during processing time irrespective of the dry sausage formulation. The fat content for normal-fat (NF), reduced-fat (RF) and low-fat (LF) sausages was 29.96%, 19.69% and 13.79%, respectively. This means an energy reduction of about 14.8% for RF and 24.5% for LF. As the fat content decreases there is an increase (P<0.05) in hardness and chewiness and a decrease (P<0.05) in cohesiveness. No differences were appreciated (P>0.05) in the presence of microorganisms as a result of the reformulation. The sensory panel considered that NF and RF products had acceptable sensory characteristics. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Xiong, T P; Yan, L L; Zhou, F; Rehan, K; Liang, D F; Chen, L; Yang, W L; Ma, Z H; Feng, M; Vedral, V
2018-01-05
Most nonequilibrium processes in thermodynamics are quantified only by inequalities; however, the Jarzynski relation presents a remarkably simple and general equality relating nonequilibrium quantities with the equilibrium free energy, and this equality holds in both the classical and quantum regimes. We report a single-spin test and confirmation of the Jarzynski relation in the quantum regime using a single ultracold ^{40}Ca^{+} ion trapped in a harmonic potential, based on a general information-theoretic equality for a temporal evolution of the system sandwiched between two projective measurements. By considering both initially pure and mixed states, respectively, we verify, in an exact and fundamental fashion, the nonequilibrium quantum thermodynamics relevant to the mutual information and Jarzynski equality.
A practical radial basis function equalizer.
Lee, J; Beach, C; Tepedelenlioglu, N
1999-01-01
A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially fewer than conventionally required. The reduction of centers is accomplished in two-steps. First an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than 2000-to-1 reduction in centers performed as well as the RBF equalizer without reduction in centers, and better than a conventional linear equalizer.
Akeroyd, Michael A
2004-08-01
The equalization stage in the equalization-cancellation model of binaural unmasking compensates for the interaural time delay (ITD) of a masking noise by introducing an opposite, internal delay [N. I. Durlach, in Foundations of Modern Auditory Theory, Vol. II., edited by J. V. Tobias (Academic, New York, 1972)]. Culling and Summerfield [J. Acoust. Soc. Am. 98, 785-797 (1995)] developed a multi-channel version of this model in which equalization was "free" to use the optimal delay in each channel. Two experiments were conducted to test if equalization was indeed free or if it was "restricted" to the same delay in all channels. One experiment measured binaural detection thresholds, using an adaptive procedure, for 1-, 5-, or 17-component tones against a broadband masking noise, in three binaural configurations (N0S180, N180S0, and N90S270). The thresholds for the 1-component stimuli were used to normalize the levels of each of the 5- and 17-component stimuli so that they were equally detectable. If equalization was restricted, then, for the 5- and 17-component stimuli, the N90S270 and N180S0 configurations would yield a greater threshold than the N0S180 configurations. No such difference was found. A subsequent experiment measured binaural detection thresholds, via psychometric functions, for a 2-component complex tone in the same three binaural configurations. Again, no differential effect of configuration was observed. An analytic model of the detection of a complex tone showed that the results were more consistent with free equalization than restricted equalization, although the size of the differences was found to depend on the shape of the psychometric function for detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmoker, J.W.
1984-11-01
Data indicate that porosity loss in subsurface carbonate rocks can be empirically represented by the power function θ = a(TTI)^b, where θ is regional porosity, TTI is Lopatin's time-temperature index of thermal maturity, the exponent b equals approximately -0.372, and the multiplier a is constant for a given data population but varies by an order of magnitude overall. Implications include the following. 1. The decrease of carbonate porosity by burial diagenesis is a maturation process depending exponentially on temperature and linearly on time. 2. The exponent b is essentially independent of the rock matrix, and may reflect rate-limiting processes of diffusive transport. 3. The multiplying coefficient a incorporates the net effect on porosity of all depositional and diagenetic parameters. Within constraints, carbonate-porosity prediction appears possible on a regional measurement scale as a function of thermal maturity. Estimation of carbonate porosity at the time of hydrocarbon generation, migration, or trapping also appears possible.
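The empirical porosity law above is direct to apply; in this sketch the value of the multiplier a is illustrative only, since the abstract notes that a varies by an order of magnitude between data populations.

```python
def carbonate_porosity(tti, a, b=-0.372):
    """Regional porosity (theta) from Lopatin's time-temperature index (TTI)
    via the empirical power law theta = a * TTI**b."""
    if tti <= 0:
        raise ValueError("TTI must be positive")
    return a * tti ** b
```

Because b is negative, predicted porosity declines monotonically as thermal maturity (TTI) increases, consistent with porosity loss during burial diagenesis.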
Real-time detection of hazardous materials in air
NASA Astrophysics Data System (ADS)
Schechter, Israel; Schroeder, Hartmut; Kompa, Karl L.
1994-03-01
A new detection system has been developed for real-time analysis of organic compounds in ambient air. It is based on multiphoton ionization by an unfocused laser beam in a single parallel-plate device. Thus, the ionization volume can be relatively large. The amount of laser-created ions is determined quantitatively from the induced total voltage drop between the biased plates (Q = ΔV·C). Mass information is obtained from computer analysis of the time-dependent signal. When a KrF laser (5 eV) is used, most organic compounds can be ionized in a two-photon process, but none of the standard components of atmospheric air are ionized by this process. Therefore, this instrument may be developed as a 'sniffer' for organic materials. The method has been applied to benzene analysis in air. The detection limit is about 10 ppb. With a simple preconcentration technique the detection limit can be decreased to the sub-ppb range.
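The charge readout underlying the detector reduces to Q = ΔV·C; a small numeric sketch follows, where the capacitance and voltage-drop values are hypothetical and singly charged ions are assumed.

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

def ions_from_voltage_drop(delta_v, capacitance):
    """Number of singly charged ions inferred from the voltage drop
    across the biased plates: Q = delta_v * C, N = Q / e."""
    q = delta_v * capacitance
    return q / E_CHARGE
```

For example, a 1 mV drop across a (hypothetical) 0.16 pF plate capacitance would correspond to on the order of a thousand singly charged ions.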
26 CFR 26.2642-6 - Qualified severance.
Code of Federal Regulations, 2011 CFR
2011-04-01
... from time to time, in equal or unequal shares, for the benefit of any one or more members of the group... percentage may be determined by means of a formula (for example, that fraction of the trust the numerator of which is equal to the transferor's unused GST tax exemption, and the denominator of which is the fair...
29 CFR 1620.17 - Jobs requiring equal responsibility in performance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... justify a wage rate differential between the man's and woman's job if the equal pay provisions otherwise... the higher rate to both men and women who are called upon from time to time to assume such supervisory... either a man or a woman) is authorized and required to determine whether to accept payment for purchases...
29 CFR 1620.17 - Jobs requiring equal responsibility in performance.
Code of Federal Regulations, 2012 CFR
2012-07-01
... justify a wage rate differential between the man's and woman's job if the equal pay provisions otherwise... the higher rate to both men and women who are called upon from time to time to assume such supervisory... either a man or a woman) is authorized and required to determine whether to accept payment for purchases...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-18
... Schedule 46. a. For step one, define the terms ``Hourly Real-Time RSG MWP'' and ``Resource CMC Real-time... RSG credits and the difference between one and the Constraint Management Charge Allocation Factor... and Headroom Need is (1) less than or equal to zero, (2) greater than or equal to the Economic...
Equal Pay Act: Wage Differentials for Time of Day Worked
ERIC Educational Resources Information Center
Stanford, Richard Alan
1974-01-01
The Supreme Court held in Corning Glass Works cases involving male only employees for night shifts that the time of day worked could constitute a factor other than sex whereby the wage differential might qualify as an exception under the Equal Pay Act. Shift differentials could be legal if proven to be nondiscriminatory. (LBH)
Holistic processing and reliance on global viewing strategies in older adults' face perception.
Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter
2014-09-01
There is increasing evidence that face recognition might be impaired in older adults, but it is unclear whether the impairment is truly perceptual, and face specific. In order to address this question, we compared performance in same/different matching tasks with face and non-face objects (watches) among young (mean age 23.7) and older adults (mean age 70.4) using a context congruency paradigm (Meinhardt-Injac, Persike, & Meinhardt, 2010, 2011a). Older adults were less accurate than young adults with both object classes, while face matching was notably impaired. Effects of context congruency and inversion, measured as the hallmarks of holistic processing, were equally strong in both age groups, and were found only for faces, but not for watches. The face-specific decline in older adults revealed deficits in handling internal facial features, while young adults matched external and internal features equally well. Comparison with non-face stimuli showed that this decline was face specific, and did not concern processing of object features in general. Taken together, the results indicate no age-related decline in the capabilities to process faces holistically. Rather, strong holistic effects, combined with a loss of precision in handling internal features, indicate that older adults rely on global viewing strategies for faces. At the same time, access to the exact properties of inner face details becomes restricted. Copyright © 2014. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.
2017-06-01
Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested with small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail. A method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested with sets of scenarios. Compared with the two improved heuristics, the performance of the new constructive heuristic is superior.
Xu, Tong; Shikhaliev, Polad M; Berenji, Gholam R; Tehranzadeh, Jamshid; Saremi, Farhood; Molloi, Sabee
2004-04-01
To evaluate the feasibility and performance of an x-ray beam equalization system for chest radiography using anthropomorphic phantoms. Area beam equalization involves an initial unequalized image acquisition, attenuator thickness calculation, mask generation using a 16 x 16 piston array, and a final equalized image acquisition. Chest radiographs of three different anthropomorphic phantoms were acquired with no beam equalization and with equalization levels of 4.8, 11.3, and 21. Six radiologists evaluated the images by scoring them from 1-5 using 13 different criteria. The dose was calculated using the known attenuator material thickness and the mAs of the x-ray tube. The visibility of anatomic structures in the under-penetrated regions of the chest radiographs was shown to be significantly (P < .01) improved after beam equalization. An equalization level of 4.8 provided most of the improvement with moderate increases in patient dose and tube loading. Higher levels of beam equalization did not show much improvement in the visibility of anatomic structures in the under-penetrated regions. A moderate level of x-ray beam equalization in chest radiography is superior to both conventional radiographs and radiographs with high levels of beam equalization. X-ray beam equalization can significantly improve the visibility of anatomic structures in the under-penetrated regions while maintaining good image quality in the lung region.
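The attenuator-thickness step in the equalization pipeline above follows directly from Beer-Lambert attenuation. A minimal sketch, assuming a known linear attenuation coefficient `mu_cm` for the piston material; the function names and the single per-region target intensity are illustrative, not the authors' implementation:

```python
import math

def attenuator_thickness(pixel_intensity, target_intensity, mu_cm):
    """Thickness of added attenuator (cm) that brings a bright region
    down to the target intensity, from Beer-Lambert attenuation:
    I_target = I * exp(-mu * t)  =>  t = ln(I / I_target) / mu."""
    if pixel_intensity <= target_intensity:
        return 0.0  # region already at or below target: add no attenuator
    return math.log(pixel_intensity / target_intensity) / mu_cm

def equalization_mask(region_intensities, target, mu_cm):
    """One thickness per region of the 16 x 16 piston array, computed
    from the regional mean intensities of the unequalized image."""
    return [[attenuator_thickness(i, target, mu_cm) for i in row]
            for row in region_intensities]
```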
Deep architecture neural network-based real-time image processing for image-guided radiotherapy.
Mori, Shinichiro
2017-08-01
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was applied to the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image from the input image without image processing. For image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising by the use of a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Advanced digital signal processing for short-haul and access network
NASA Astrophysics Data System (ADS)
Zhang, Junwen; Yu, Jianjun; Chi, Nan
2016-02-01
Digital signal processing (DSP) has recently proved to be a successful technology in high-speed, high-spectral-efficiency optical short-haul and access networks, enabling high performance through digital equalization and compensation. In this paper, we investigate advanced DSP at the transmitter and receiver sides for signal pre-equalization and post-equalization in an optical access network. A novel DSP-based digital and optical pre-equalization scheme is proposed for bandwidth-limited high-speed short-distance communication systems, based on the feedback of receiver-side adaptive equalizers such as the least-mean-squares (LMS) algorithm and constant- or multi-modulus algorithms (CMA, MMA). Based on this scheme, we experimentally demonstrate 400GE on a single optical carrier using the highest ETDM 120-GBaud PDM-PAM-4 signal, one external modulator, and coherent detection. A line rate of 480 Gb/s is achieved, which accommodates 20% forward-error correction (FEC) overhead while keeping the 400-Gb/s net information rate. The performance after fiber transmission shows a large margin for both short-range and metro/regional networks. We also extend the advanced DSP to short-haul optical access networks using high-order QAMs. We propose and demonstrate a high-speed multi-band CAP-WDM-PON system based on intensity modulation, direct detection, and digital equalization. A hybrid modified cascaded MMA post-equalization scheme is used to equalize the multi-band CAP-mQAM signals. Using this scheme, we successfully demonstrate a 550-Gb/s high-capacity WDM-PON system with 11 WDM channels, 55 sub-bands, and 10 Gb/s per user in the downstream over 40-km SMF.
Time operators in stroboscopic wave-packet basis and the time scales in tunneling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokes, P.
2011-03-15
We demonstrate that the time operator that measures the time of arrival of a quantum particle into a chosen state can be defined as a self-adjoint quantum-mechanical operator using periodic boundary conditions and applied to wave functions in energy representation. The time becomes quantized into discrete eigenvalues; and the eigenstates of the time operator, i.e., the stroboscopic wave packets introduced recently [Phys. Rev. Lett. 101, 046402 (2008)], form an orthogonal system of states. The formalism provides simple physical interpretation of the time-measurement process and direct construction of a normalized, positive definite probability distribution for the quantized values of the arrival time. The average value of the time is equal to the phase time but in general depends on the choice of zero time eigenstate, whereas the uncertainty of the average is related to the traversal time and is independent of this choice. The general formalism is applied to a particle tunneling through a resonant tunneling barrier in one dimension.
A stochastic evolution model for residue Insertion-Deletion Independent from Substitution.
Lèbre, Sophie; Michel, Christian J
2010-12-01
We develop here a new class of stochastic models of gene evolution based on residue Insertion-Deletion Independent from Substitution (IDIS). Indeed, in contrast to all existing evolution models, insertions and deletions are modeled here by a concept in population dynamics. Therefore, they are not only independent from each other, but also independent from the substitution process. After a separate stochastic analysis of the substitution and the insertion-deletion processes, we obtain a matrix differential equation combining these two processes defining the IDIS model. By deriving a general solution, we give an analytical expression of the residue occurrence probability at evolution time t as a function of a substitution rate matrix, an insertion rate vector, a deletion rate and an initial residue probability vector. Various mathematical properties of the IDIS model in relation with time t are derived: time scale, time step, time inversion and sequence length. Particular expressions of the nucleotide occurrence probability at time t are given for classical substitution rate matrices in various biological contexts: equal insertion rate, insertion-deletion only and substitution only. All these expressions can be directly used for biological evolutionary applications. The IDIS model shows a strongly different stochastic behavior from the classical substitution only model when compared on a gene dataset. Indeed, by considering three processes of residue insertion, deletion and substitution independently from each other, it allows a more realistic representation of gene evolution and opens new directions and applications in this research field. Copyright © 2010 Elsevier Ltd. All rights reserved.
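The matrix differential equation described above can be written schematically as follows. This is a hedged sketch in our own notation, not necessarily the authors' exact formulation: M is the substitution rate matrix, v the insertion rate vector, i and d scalar insertion and deletion rates, and P(t) the residue occurrence probability vector.

```latex
\frac{d\mathbf{P}(t)}{dt} \;=\; M\,\mathbf{P}(t) \;+\; i\,\mathbf{v} \;-\; d\,\mathbf{P}(t),
\qquad \mathbf{P}(0) = \mathbf{P}_0 .
```

Because the right-hand side is linear in P(t), a general solution of matrix-exponential form exists, which is what yields the analytical expression for the residue occurrence probability at evolution time t mentioned in the abstract.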
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
2017-01-01
This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved, compared with some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the processed image is integrated with the input image. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image. PMID:29403529
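The four-segment equalization described above can be sketched as follows. The segment thresholds (mean ± one standard deviation) and the rank-based per-segment CDF are plausible simplifications for illustration, not the exact MVSIHE formulation:

```python
import numpy as np

def mvsihe_sketch(img):
    """Simplified mean/variance-based subimage histogram equalization
    on an 8-bit grayscale image (2-D numpy array): split the intensity
    range into four segments around the mean, then equalize each
    segment within its own range. Illustrative thresholds only."""
    m, s = img.mean(), img.std()
    # four luminance segments split at mean - std, mean, mean + std
    edges = [0.0, max(0.0, m - s), m, min(255.0, m + s), 256.0]
    out = np.zeros_like(img)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (img >= lo) & (img < hi)
        if not mask.any() or hi <= lo:
            continue
        vals = img[mask].astype(float)
        # empirical CDF of the segment's pixels (rank / count)
        cdf = np.searchsorted(np.sort(vals), vals, side="right") / vals.size
        # map the segment onto its own intensity range
        out[mask] = (lo + cdf * (hi - 1.0 - lo)).astype(img.dtype)
    return out
```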
Trust, cooperation, and equality: a psychological analysis of the formation of social capital.
Cozzolino, Philip J
2011-06-01
Research suggests that in modern Western culture there is a positive relationship between the equality of resources and the formation of trust and cooperation, two psychological components of social capital. Two studies elucidate the psychological processes underlying that relationship. Study 1 experimentally tested the influence of resource distributions on the formation of trust and intentions to cooperate; individuals receiving a deficit of resources and a surplus of resources evidenced lower levels of social capital (i.e., trust and cooperation) than did individuals receiving equal amounts. Analyses revealed the process was affective for deficit participants and cognitive for surplus participants. Study 2 provided suggestive support for the affective-model of equality and social capital using proxy variables in the 1996 General Social Survey data set. Results suggest support for a causal path of unequal resource distributions generating affective experiences and cognitive concerns of justice, which mediate disengagement and distrust of others. ©2010 The British Psychological Society.
Nature or petrochemistry?-biologically degradable materials.
Mecking, Stefan
2004-02-20
Naturally occurring polymers have been utilized for a long time as materials, however, their application as plastics has been restricted because of their limited thermoplastic processability. Recently, the microbial synthesis of polyesters directly from carbohydrate sources has attracted considerable attention. The industrial-scale production of poly(lactic acid) from lactic acid generated by fermentation now provides a renewable resources-based polyester as a commodity plastic for the first time. The biodegradability of a given material is independent of its origin, and biodegradable plastics can equally well be prepared from fossil fuel feedstocks. A consideration of the overall carbon dioxide emissions and consumption of non-renewable resources over the entire life-cycle of a product is not necessarily favorable for plastics based on renewable resources with current technology-in addition to the feedstocks for the synthesis of the polymer materials, the feedstock for generation of the overall energy required for production and processing is decisive.
The Stem Cell Laboratory: Design, Equipment, and Oversight
Wesselschmidt, Robin L.; Schwartz, Philip H.
2013-01-01
This chapter describes some of the major issues to be considered when setting up a laboratory for the culture of human pluripotent stem cells (hPSCs). The process of establishing a hPSC laboratory can be divided into two equally important parts. One is completely administrative and includes developing protocols, seeking approval, and establishing reporting processes and documentation. The other part involves the physical plant and includes design, equipment, and personnel. Proper planning of laboratory operations and proper design of the physical layout of the stem cell laboratory, so that it meets the scope of planned operations, is a major undertaking, but the time spent upfront will pay long-term returns in operational efficiency and effectiveness. A well-planned, organized, and properly equipped laboratory supports research activities by increasing efficiency and reducing lost time and wasted resources. PMID:21822863
2017-04-01
ARL-TR-8006 ● Apr 2017 ● US Army Research Laboratory. Quasi-Static and Dynamic Characterization of Equal Channel Angular Extrusion… Report type: Technical Report; dates covered: April 2015–January 2016.
26 CFR 301.6343-2 - Return of wrongfully levied upon property.
Code of Federal Regulations, 2011 CFR
2011-04-01
... IRS may return— (i) The specific property levied upon; (ii) An amount of money equal to the amount of money levied upon; or (iii) An amount of money equal to the amount of money received by the United... property, the property may be returned at any time. An amount equal to the amount of money levied upon or...
26 CFR 301.6343-2 - Return of wrongfully levied upon property.
Code of Federal Regulations, 2010 CFR
2010-04-01
... IRS may return— (i) The specific property levied upon; (ii) An amount of money equal to the amount of money levied upon; or (iii) An amount of money equal to the amount of money received by the United... property, the property may be returned at any time. An amount equal to the amount of money levied upon or...
Equal Pay: A Thirty-Five Year Perspective.
ERIC Educational Resources Information Center
Castro, Ida L.
Issued on the 35th anniversary of the signing of the Equal Pay Act (1963), this report is a historical analysis of the economic trends affecting women workers from the years leading up to passage of the act through the present. It is divided into three time periods to highlight important developments: Part I--The Early Impact of the Equal Pay Act,…
Equal channel angular pressing (ECAP) and forging of commercially pure titanium (CP-Ti)
NASA Astrophysics Data System (ADS)
Krystian, Maciej; Huber, Daniel; Horky, Jelena
2017-10-01
Pure titanium with an ultra-fine grained (UFG) microstructure is an exceptionally interesting material for biomedical and dental applications due to its very good biocompatibility and high strength. Such bulk, high-strength UFG materials are commonly produced by different Severe Plastic Deformation (SPD) techniques, of which Equal Channel Angular Pressing (ECAP) is the most commonly used. In this investigation, commercially pure (CP) titanium (grade 2) was processed by ECAP using a die with a channel diameter of 20 mm and an intersection angle of 105°. Six passes using route B120 (in which the billet is rotated between subsequent passes by 120°) at a temperature of 400°C were performed, leading to substantial grain refinement and an increase in strength and hardness. Subsequently, a thermal-treatment study on ECAP-processed samples at different temperatures and for different time periods was carried out, revealing the stability limit for ECAP CP-Ti as well as the best conditions leading to an improvement in both strength and ductility. Furthermore, room-temperature forging of the as-received (AR; hot-rolled and annealed) as well as the ECAP-processed material was conducted. Tensile tests and hardness mappings revealed that forging is capable of further increasing the strength of ECAP CP-Ti by more than 20%. Moreover, the mechanical properties are significantly more homogeneous than after forging alone.
In-line Kevlar filters for microfiltration of transuranic-containing liquid streams.
Gonzales, G J; Beddingfield, D H; Lieberman, J L; Curtis, J M; Ficklin, A C
1992-06-01
The Department of Energy Rocky Flats Plant has numerous ongoing efforts to minimize the generation of residue and waste and to improve safety and health. Spent polypropylene liquid filters held for plutonium recovery, known as "residue," or as transuranic mixed waste, contribute to storage capacity problems and create radiation safety and health considerations. An in-line process-liquid filter made of Kevlar polymer fiber has been evaluated for its potential to: (1) minimize filter residue, (2) recover economically viable quantities of plutonium, (3) minimize liquid storage tank and process-stream radioactivity, and (4) reduce potential personnel radiation exposure associated with these sources. Kevlar filters were rated at less than or equal to 1 μm nominal filtration and are capable of reducing undissolved plutonium particles to more than 10 times below the economic discard limit; however, they produced high back-pressures and are not yet acid-resistant. Kevlar filters performed independently of loaded particles, serving as a sieve. Polypropylene filters removed molybdenum particles at efficiencies equal to Kevlar filters only after loading molybdenum during recirculation events. Kevlar's high-efficiency microfiltration of process-liquid streams for the removal of actinides has the potential to reduce personnel radiation exposure by a factor of 6 or greater, while simultaneously reducing the generation of filter residue and waste by a factor of 7. Insoluble plutonium may be recoverable from Kevlar filters by incineration.
A fast optimization approach for treatment planning of volumetric modulated arc therapy.
Yan, Hui; Dai, Jian-Rong; Li, Ye-Xiong
2018-05-30
Volumetric modulated arc therapy (VMAT) is widely used in clinical practice. It not only significantly reduces treatment time, but also produces high-quality treatment plans. Current optimization approaches rely heavily on stochastic algorithms, which are time-consuming and less repeatable. In this study, a novel approach is proposed to provide a highly efficient optimization algorithm for VMAT treatment planning. A progressive sampling strategy is employed for the beam arrangement of VMAT planning. Initial equally spaced beams are added to the plan at a coarse sampling resolution. Fluence-map optimization and leaf sequencing are performed for these beams. Then, the coefficients of the fluence-map optimization algorithm are adjusted according to the known fluence maps of these beams. In the next round the sampling resolution is doubled and more beams are added. This process continues until the total number of beams is reached. The performance of the VMAT optimization algorithm was evaluated using three clinical cases and compared to that of a commercial planning system. The dosimetric quality of the VMAT plans is equal to or better than the corresponding IMRT plans for the three clinical cases. The maximum dose to critical organs is reduced considerably for VMAT plans compared to IMRT plans, especially in the head and neck case. The total number of segments and monitor units is reduced for VMAT plans. For the three clinical cases, VMAT optimization takes less than 5 min with the proposed approach, 3-4 times less than the commercial system. The proposed VMAT optimization algorithm is able to produce high-quality VMAT plans efficiently and consistently. It presents a new way to accelerate the current optimization process of VMAT planning.
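The progressive beam-sampling strategy above (coarse equal spacing, then doubling the resolution by inserting midpoints until the beam budget is reached) can be sketched as follows; the function name and defaults are illustrative, not the paper's implementation:

```python
def progressive_beams(total_beams, start_beams=4, full_circle=360.0):
    """Generate beam angles progressively: start with a few equally
    spaced angles, then double the sampling resolution each round by
    inserting midpoints, until total_beams angles exist."""
    angles = [i * full_circle / start_beams for i in range(start_beams)]
    while len(angles) < total_beams:
        step = full_circle / len(angles)
        midpoints = [a + step / 2.0 for a in angles]
        room = total_beams - len(angles)      # cap the final round
        angles = sorted(angles + midpoints[:room])
    return angles
```

Each round reuses the fluence maps already optimized for the existing beams, which is what makes the refinement cheap compared with restarting the optimization from scratch.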
Laser beam-plasma plume interaction during laser welding
NASA Astrophysics Data System (ADS)
Hoffman, Jacek; Moscicki, Tomasz; Szymanski, Zygmunt
2003-10-01
The laser welding process is unstable because the keyhole wall oscillates, which results in oscillations of the plasma plume over the keyhole mouth. The characteristic frequencies are equal to 0.5-4 kHz. Since the plasma plume absorbs and refracts laser radiation, plasma oscillations modulate the laser beam before it reaches the workpiece. In this work, temporary electron densities and temperatures are determined in the peaks of plasma bursts during welding with a continuous-wave CO2 laser. It has been found that during strong bursts the plasma plume over the keyhole consists of metal vapour only, undiluted by the shielding gas. As expected, the peak values of electron density are about two times higher than their time-averaged values. Since the plasma absorption coefficient scales as ~N_e^2/T^(3/2) (for CO2 laser radiation), the results show that the power of the laser beam reaching the metal surface is modulated by the plasma plume oscillations. The attenuation factor equals 4-6% of the laser power, but it is expected to be doubled by the refraction effect. The results, together with an analysis of the colour pictures from a streak camera, also allow interpretation of the dynamics of the plasma plume.
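The quoted scaling makes the modulation easy to quantify. A one-line sketch (ours, not the paper's code) of the peak-to-average absorption ratio implied by alpha ~ N_e^2/T^(3/2):

```python
def absorption_ratio(ne_peak, ne_avg, t_peak, t_avg):
    """Peak-to-average plasma absorption ratio from the inverse-
    bremsstrahlung scaling alpha ~ Ne^2 / T^(3/2) for CO2 laser light."""
    return (ne_peak / ne_avg) ** 2 / (t_peak / t_avg) ** 1.5
```

With the electron density about twice its time-averaged value at comparable temperature, as reported in the abstract, the absorption coefficient in a burst is roughly four times its average.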
NASA Technical Reports Server (NTRS)
Wilmoth, R. G.
1973-01-01
A molecular beam time-of-flight technique is studied as a means of determining surface stay times for physical adsorption. The experimental approach consists of pulsing a molecular beam, allowing the pulse to strike an adsorbing surface and detecting the molecular pulse after it has subsequently desorbed. The technique is also found to be useful for general studies of adsorption under nonequilibrium conditions including the study of adsorbate-adsorbate interactions. The shape of the detected pulse is analyzed in detail for a first-order desorption process. For mean stay times, tau, less than the mean molecular transit times involved, the peak of the detected pulse is delayed by an amount approximately equal to tau. For tau much greater than these transit times, the detected pulse should decay as exp(-t/tau). However, for stay times of the order of the transit times, both the molecular speed distributions and the incident pulse duration time must be taken into account.
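The long-stay-time limit above, where the detected pulse decays as exp(-t/tau), suggests a simple way to extract tau from a pulse tail: fit a line to the log signal. A hedged sketch, not the instrument's actual data reduction:

```python
import math

def stay_time_from_decay(times, signal):
    """Fit ln(signal) vs t by ordinary least squares and return
    tau = -1/slope, valid when tau >> the molecular transit times
    so the tail is a clean exponential."""
    logs = [math.log(s) for s in signal]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -1.0 / slope
```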
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Xinming; Lai Chaojen; Whitman, Gary J.
Purpose: The scan equalization digital mammography (SEDM) technique combines slot scanning and exposure equalization to improve low-contrast performance of digital mammography in dense tissue areas. In this study, full-field digital mammography (FFDM) images of an anthropomorphic breast phantom acquired with an anti-scatter grid at various exposure levels were superimposed to simulate SEDM images and investigate the improvement of low-contrast performance as quantified by primary signal-to-noise ratios (PSNRs). Methods: We imaged an anthropomorphic breast phantom (Gammex 169 "Rachel," Gammex RMI, Middleton, WI) at various exposure levels using a FFDM system (Senographe 2000D, GE Medical Systems, Milwaukee, WI). The exposure equalization factors were computed based on a standard FFDM image acquired in the automatic exposure control (AEC) mode. The equalized image was simulated and constructed by superimposing a selected set of FFDM images acquired at 2, 1, 1/2, 1/4, 1/8, 1/16, and 1/32 times the exposure level of the standard AEC-timed technique (125 mAs), using the equalization factors computed for each region. Finally, the equalized image was renormalized regionally with the exposure equalization factors to result in an appearance similar to that of standard digital mammography. Two sets of FFDM images were acquired to allow for two identically, but independently, formed equalized images to be subtracted from each other to estimate the noise levels. Similarly, two identically but independently acquired standard FFDM images were subtracted to estimate the noise levels. Corrections were applied to remove the excess system noise accumulated during image superimposition in forming the equalized image. PSNRs over the compressed area of the breast phantom were computed and used to quantitatively study the effects of exposure equalization on low-contrast performance in digital mammography.
Results: We found that the highest achievable PSNR improvement factor was 1.89 for the anthropomorphic breast phantom used in this study. The overall PSNRs were measured to be 79.6 for the FFDM imaging and 107.6 for the simulated SEDM imaging on average in the compressed area of the breast phantom, resulting in an average improvement of PSNR by ~35% with exposure equalization. We also found that the PSNRs appeared to be largely uniform with exposure equalization, and the standard deviations of PSNRs were estimated to be 10.3 and 7.9 for the FFDM imaging and the simulated SEDM imaging, respectively. The average glandular dose for SEDM was estimated to be 212.5 mrad, ~34% lower than that of standard AEC-timed FFDM (323.8 mrad) as a result of exposure equalization for the entire breast phantom. Conclusions: Exposure equalization was found to substantially improve image PSNRs in dense tissue regions and result in more uniform image PSNRs. This improvement may lead to better low-contrast performance in detecting and visualizing soft tissue masses and micro-calcifications in dense tissue areas for breast imaging tasks.
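The pair-subtraction noise estimate described in the Methods can be sketched as follows. The PSNR definition used here (mean signal over estimated noise) is a simplification of the paper's regional computation, and the factor sqrt(2) reflects that subtracting two independent images doubles the noise variance:

```python
import numpy as np

def psnr_from_pair(img_a, img_b):
    """Estimate SNR from two identically but independently acquired
    images: the difference cancels the common signal, and its standard
    deviation overestimates single-image noise by sqrt(2)."""
    diff = img_a.astype(float) - img_b.astype(float)
    noise = diff.std() / np.sqrt(2.0)
    signal = (img_a.astype(float) + img_b.astype(float)).mean() / 2.0
    return signal / noise
```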
Plat, Rika; Lowie, Wander; de Bot, Kees
2017-01-01
Reaction time data have long been collected in order to gain insight into the underlying mechanisms involved in language processing. Means analyses often attempt to break down what factors relate to what portion of the total reaction time. From a dynamic systems theory perspective or an interaction dominant view of language processing, it is impossible to isolate discrete factors contributing to language processing, since these continually and interactively play a role. Non-linear analyses offer the tools to investigate the underlying process of language use in time, without having to isolate discrete factors. Patterns of variability in reaction time data may disclose the relative contribution of automatic (grapheme-to-phoneme conversion) processing and attention-demanding (semantic) processing. The presence of a fractal structure in the variability of a reaction time series indicates automaticity in the mental structures contributing to a task. A decorrelated pattern of variability will indicate a higher degree of attention-demanding processing. A focus on variability patterns allows us to examine the relative contribution of automatic and attention-demanding processing when a speaker is using the mother tongue (L1) or a second language (L2). A word naming task conducted in the L1 (Dutch) and L2 (English) shows L1 word processing to rely more on automatic spelling-to-sound conversion than L2 word processing. A word naming task with a semantic categorization subtask showed more reliance on attention-demanding semantic processing when using the L2. A comparison to L1 English data shows this was not only due to the amount of language use or language dominance, but also to the difference in orthographic depth between Dutch and English. 
An important implication of this finding is that when the same task is used to test and compare different languages, one cannot straightforwardly assume that the same cognitive subprocesses are involved to an equal degree in each language.
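One common non-linear analysis of the kind described above is estimating the slope of the log-log power spectrum of a reaction-time series: a slope near 0 suggests decorrelated (white) variability, while a slope near -1 suggests fractal 1/f structure. A rough sketch, not the authors' analysis pipeline:

```python
import numpy as np

def spectral_slope(series):
    """Least-squares slope of log power vs log frequency for a
    de-meaned time series; illustrative estimator only."""
    x = np.asarray(series, float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size)
    keep = freqs > 0                      # drop the zero-frequency bin
    logf, logp = np.log(freqs[keep]), np.log(power[keep])
    slope, _ = np.polyfit(logf, logp, 1)
    return slope
```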
MIMO signal progressing with RLSCMA algorithm for multi-mode multi-core optical transmission system
NASA Astrophysics Data System (ADS)
Bi, Yuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya
2018-01-01
In the process of transmitting signals over multi-mode multi-core fiber, mode coupling occurs between modes. Mode dispersion also occurs because each mode has a different transmission speed in the link. Mode coupling and mode dispersion damage the useful signal in the transmission link, so the receiver needs to process the received signal with digital signal processing and compensate for the damage in the link. We first analyze the influence of mode coupling and mode dispersion in the process of transmitting signals over multi-mode multi-core fiber, then present the relationship between the coupling coefficient and the dispersion coefficient. We then carry out adaptive signal processing with MIMO equalizers based on the recursive least squares constant modulus algorithm (RLSCMA). The MIMO equalization algorithm offers adaptive equalization taps according to the degree of crosstalk in cores or modes, which eliminates the interference among different modes and cores in a space division multiplexing (SDM) transmission system. The simulation results show that the distorted signals are restored efficiently with fast convergence speed.
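For context, the constant-modulus family that RLSCMA builds on updates equalizer taps by a stochastic gradient on the modulus error. The sketch below shows the classic LMS-style CMA step; the paper's recursive-least-squares variant is more involved, and this is not its implementation:

```python
import numpy as np

def cma_step(w, x, mu=0.01, radius=1.0):
    """One constant-modulus-algorithm tap update.
    w: complex tap vector; x: the most recent input samples (same length).
    Blind: no training symbols, only the target modulus `radius`."""
    y = np.vdot(w, x)                  # equalizer output  y = w^H x
    e = y * (radius - abs(y) ** 2)     # constant-modulus error term
    return w + mu * e.conjugate() * x  # gradient step toward |y|^2 = radius
```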
Software manual for operating particle displacement tracking data acquisition and reduction system
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
The software manual is presented. The necessary steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all electronic technique employing a CCD video camera and a large memory buffer frame-grabber board to record low velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single exposure images are time coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
Parametric resonant triad interactions in a free shear layer
NASA Technical Reports Server (NTRS)
Mallier, R.; Maslowe, S. A.
1993-01-01
We investigate the weakly nonlinear evolution of a triad of nearly neutral modes superimposed on a mixing layer with velocity profile ū = U_m + tanh y. The perturbation consists of a plane wave and a pair of oblique waves, each inclined at approximately 60 degrees to the mean flow direction. Because the evolution occurs on a relatively fast time scale, the critical layer dynamics dominate the process, and the amplitude evolution of the oblique waves is governed by an integro-differential equation. The long-time solution of this equation predicts very rapid (exponential of an exponential) amplification, and we discuss the pertinence of this result to vortex pairing phenomena in mixing layers.
Chaos control by electric current in an enzymatic reaction.
Lekebusch, A; Förster, A; Schneider, F W
1996-09-01
We apply the continuous delayed feedback method of Pyragas to control chaos in the enzymatic Peroxidase-Oxidase (PO) reaction, using the electric current as the control parameter. At each data point in the time series, a time delayed feedback function applies a small amplitude perturbation to inert platinum electrodes, which causes redox processes on the surface of the electrodes. These perturbations are calculated as the difference between the previous (time delayed) signal and the actual signal. Unstable periodic P1, 1(1), and 1(2) orbits (UPOs) were stabilized in the CSTR (continuous stirred tank reactor) experiments. The stabilization is demonstrated by at least three conditions: A minimum in the experimental dispersion function, the equality of the delay time with the period of the stabilized attractor and the embedment of the stabilized periodic attractor in the chaotic attractor.
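The delayed-feedback perturbation described above, u(t) = k[s(t − τ) − s(t)], vanishes once the stabilized orbit has period τ, which is what makes the Pyragas method non-invasive. A minimal discrete-time sketch with an integer-step delay, not the experimental control code:

```python
def pyragas_perturbation(signal, tau, k):
    """Delayed-feedback (Pyragas) control signal: the perturbation at
    each step is proportional to the difference between the delayed
    and the current measurement. Zero before tau samples exist."""
    return [0.0 if t < tau else k * (signal[t - tau] - signal[t])
            for t in range(len(signal))]
```

On a signal that is already periodic with period equal to the delay, the perturbation is identically zero, mirroring the condition used in the experiment (delay time equal to the period of the stabilized attractor).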
NASA Astrophysics Data System (ADS)
Deng, Rui; Yu, Jianjun; He, Jing; Wei, Yiran
2018-05-01
In this paper, we experimentally demonstrated a complete real-time 4-level pulse amplitude modulation (PAM-4) Q-band radio-over-fiber (RoF) system with optical heterodyning and envelope detector (ED) down-conversion. Meanwhile, a cost-efficient real-time implementation scheme of cascaded multi-modulus algorithm (CMMA) equalization is proposed, and the CMMA equalization is applied in the system for signal recovery. In addition, to improve the transmission performance of the system, an interleaved Reed-Solomon (RS) code is applied in the real-time system. Although there is serious power impulse noise in the system, the system can still achieve a bit error rate (BER) below 1 × 10^-7 after 25 km of standard single-mode fiber (SSMF) transmission and 1 m of wireless transmission.
Gillard, Jonathan
2015-12-01
This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects. © The Author(s) 2011.
Space and time scales of shoreline change at Cape Cod National Seashore, MA, USA
Allen, J.R.; LaBash, C.L.; List, J.H.; Kraus, Nicholas C.; McDougal, William G.
1999-01-01
Different processes cause patterns of shoreline change that are exhibited at different magnitudes and nested into different spatial- and time-scale hierarchies. The 77-km outer beach at Cape Cod National Seashore offers one of the few U.S. federally owned portions of beach in which to study shoreline change within the full range of sediment source and sink relationships, barely affected by human intervention. 'Mean trends' of shoreline change are best observed at long time scales but contain much spatial variation; thus, many sites are not equal in response. Long-term, earlier-noted trends are confirmed, but the added quantification and resolution greatly improve the understanding of the appropriate spatial and time scales of the processes driving bluff retreat and barrier island changes in both the north and south depocenters. Shorter time scales allow comparison of trends and uncertainty in shoreline change at local scales but depend upon some measure of storm intensity and seasonal frequency. Single-event shoreline surveys for one storm, made at daily intervals after the erosional phase, suggest a recovery time of six days for the system, identify three sites with abnormally large change, and show that responses at these sites are spatially coherent for as yet unknown reasons. Areas near inlets are the most variable at all time scales. Hierarchies in both process and form are suggested.
Two Balls' Collision of Mass Ratio 3:1
NASA Astrophysics Data System (ADS)
Ogawara, Yasuo; Hull, Michael M.
2018-04-01
Students will sometimes ask why momentum and kinetic energy concepts are both necessary. When physics teachers demonstrate situations that require both an understanding of kinetic energy and momentum, a favorite is Newton's cradle, or a comparable demonstration of two balls of equal mass hitting each other. However, in addition to the case of two balls of equal mass, if a ball hits another ball of three times the mass with equal speed, the results are also interesting, and, like the equal-mass demonstration, both kinetic energy and momentum are critical for understanding the motion.
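The 3:1 case can be worked out with the standard one-dimensional elastic-collision formulas. Assuming a head-on elastic collision in which the two balls approach with equal and opposite speeds (one natural reading of "equal speed"):

```latex
v_1' = \frac{(m_1 - m_2)\,v_1 + 2 m_2 v_2}{m_1 + m_2}, \qquad
v_2' = \frac{(m_2 - m_1)\,v_2 + 2 m_1 v_1}{m_1 + m_2}.

\text{With } m_1 = m,\; m_2 = 3m,\; v_1 = +v,\; v_2 = -v:

v_1' = \frac{(m - 3m)\,v + 2(3m)(-v)}{4m} = -2v, \qquad
v_2' = \frac{(3m - m)(-v) + 2m\,v}{4m} = 0.
```

The heavy ball stops dead while the light ball rebounds at twice its incoming speed. Both conservation laws check out (momentum: mv − 3mv = −2mv before and m(−2v) after; kinetic energy: ½mv² + (3/2)mv² = 2mv² before and ½m(2v)² after), and neither law alone would pin down this outcome, which is the pedagogical point.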
Information Processing Capacity of Dynamical Systems
NASA Astrophysics Data System (ADS)
Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge
2012-07-01
Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
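The capacity measure described here can be sketched numerically: drive a small fading-memory system with an i.i.d. input and score, for each target function of the input history, how well the best linear readout of the state reproduces it, C = 1 − MSE/var. The toy echo-state reservoir and delayed-input targets below are illustrative choices of ours, not the paper's benchmark systems.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 20, 5000, 200

# Hypothetical small echo-state reservoir (illustrative stand-in)
W = rng.normal(0, 1, (N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 -> fading memory
w_in = rng.normal(0, 1, N)

u = rng.uniform(-1, 1, T)                        # i.i.d. input signal
x = np.zeros((T, N))
for t in range(1, T):
    x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])

X = x[washout:]                                  # state variables after washout

def capacity(target):
    """C = 1 - MSE/var(target) for the optimal linear readout."""
    z = target - target.mean()
    w, *_ = np.linalg.lstsq(X, z, rcond=None)
    mse = np.mean((z - X @ w) ** 2)
    return 1.0 - mse / np.mean(z ** 2)

# Linear memory capacities for the delayed-input targets u(t - k)
caps = [capacity(u[washout - k:T - k]) for k in range(1, 15)]
total = sum(caps)                                # bounded by N state variables
```

The summed capacity stays below the number of state variables N, as the theorem requires, and decays with delay, showing the memory side of the memory/non-linearity trade-off.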
Information Processing Capacity of Dynamical Systems
Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge
2012-01-01
Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038
Unlikely Fluctuations and Non-Equilibrium Work Theorems-A Simple Example.
Muzikar, Paul
2016-06-30
An exciting development in statistical mechanics has been the elucidation of a series of surprising equalities involving the work done during a nonequilibrium process. Astumian has presented an elegant example of such an equality, involving a colloidal particle undergoing Brownian motion in the presence of gravity. We analyze this example; its simplicity, and its link to geometric Brownian motion, allows us to clarify the inner workings of the equality. Our analysis explicitly shows the important role played by large, unlikely fluctuations.
Selfsimilar time dependent shock structures
NASA Astrophysics Data System (ADS)
Beck, R.; Drury, L. O.
1985-08-01
Diffusive shock acceleration as an astrophysical mechanism for accelerating charged particles has the advantage of being highly efficient. This means however that the theory is of necessity nonlinear; the reaction of the accelerated particles on the shock structure and the acceleration process must be self-consistently included in any attempt to develop a complete theory of diffusive shock acceleration. Considerable effort has been invested in attempting, at least partially, to do this and it has become clear that in general either the maximum particle energy must be restricted by introducing additional loss processes into the problem or the acceleration must be treated as a time dependent problem (Drury, 1984). It is concluded that stationary modified shock structures can only exist for strong shocks if additional loss processes limit the maximum energy a particle can attain. This is certainly possible and if it occurs the energy loss from the shock will lead to much greater shock compressions. It is however equally possible that no such processes exist and we must then ask what sort of nonstationary shock structure develops. The same argument which excludes stationary structures also rules out periodic solutions and indeed any solution where the width of the shock remains bounded. It follows that the width of the shock must increase secularly with time and it is natural to examine the possibility of selfsimilar time dependent solutions.
Selfsimilar time dependent shock structures
NASA Technical Reports Server (NTRS)
Beck, R.; Drury, L. O.
1985-01-01
Diffusive shock acceleration as an astrophysical mechanism for accelerating charged particles has the advantage of being highly efficient. This means however that the theory is of necessity nonlinear; the reaction of the accelerated particles on the shock structure and the acceleration process must be self-consistently included in any attempt to develop a complete theory of diffusive shock acceleration. Considerable effort has been invested in attempting, at least partially, to do this and it has become clear that in general either the maximum particle energy must be restricted by introducing additional loss processes into the problem or the acceleration must be treated as a time dependent problem (Drury, 1984). It is concluded that stationary modified shock structures can only exist for strong shocks if additional loss processes limit the maximum energy a particle can attain. This is certainly possible and if it occurs the energy loss from the shock will lead to much greater shock compressions. It is however equally possible that no such processes exist and we must then ask what sort of nonstationary shock structure develops. The same argument which excludes stationary structures also rules out periodic solutions and indeed any solution where the width of the shock remains bounded. It follows that the width of the shock must increase secularly with time and it is natural to examine the possibility of selfsimilar time dependent solutions.
Bürger, Raimund; Diehl, Stefan; Mejías, Camilo
2016-01-01
The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages for each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (semi-implicit method), but less efficient based on two types of batch sedimentation tests.
Derivative of Area Equals Perimeter--Coincidence or Rule?
ERIC Educational Resources Information Center
Zazkis, Rina; Sinitsky, Ilya; Leikin, Roza
2013-01-01
Why is the derivative of the area of a circle equal to its circumference? Why is the derivative of the volume of a sphere equal to its surface area? And why does a similar relationship not hold for a square or a cube? Or does it? In their work in teacher education, these authors have heard at times undesirable responses to these questions:…
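One standard resolution of the square-and-cube puzzle, offered here as a hedged aside: the derivative-equals-boundary relation holds whenever the figure is parametrized by a "radius" that measures uniform growth normal to the boundary (the inradius), as it does for the circle and sphere.

```latex
\frac{d}{dr}\left(\pi r^2\right) = 2\pi r, \qquad
\frac{d}{dr}\left(\tfrac{4}{3}\pi r^3\right) = 4\pi r^2.

\text{Square of side } s:\quad \frac{d}{ds}\,s^2 = 2s \neq 4s,
\quad\text{but with the half-side (inradius) } r,\ s = 2r:\quad
\frac{d}{dr}\,(2r)^2 = 8r = \text{perimeter}.
```

Increasing r by dr adds a strip of width dr along the whole boundary, so dA ≈ P · dr; the square only "fails" when differentiated with respect to the full side, which grows the boundary by dr/2 on each face.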
Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
Technetium-99m methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel and hence inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have gained only limited acceptance. In this study, we investigated the effect of the GHE technique on 99mTc-MDP bone scan images. A set of 89 low-contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel-hole collimation on a Symbia E gamma camera. The images were then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 is for very poor and 5 is for the best image quality. A statistical test was applied to find the significance of the difference between the mean scores assigned to the input and processed images. The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference in the input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per the requirements of nuclear medicine physicians. GHE techniques can be used on low-contrast bone scan images. In some cases, a histogram equalization technique in combination with some other postprocessing technique is useful.
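The GHE step the study applies is the classic CDF remapping, sketched minimally below in NumPy. The synthetic low-contrast image merely stands in for a bone scan, and the 256-gray-level assumption is ours, not the camera's.

```python
import numpy as np

def global_histogram_equalization(img, n_bins=256):
    """Map gray levels through the normalized CDF so the output
    histogram is approximately uniform (classic GHE).
    Assumes the image is not constant-valued."""
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, n_bins))
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0][0]
    # Standard GHE transfer function, rescaled to [0, n_bins - 1]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (n_bins - 1))
    return lut[img].astype(np.uint8)

# Low-contrast synthetic image: counts squeezed into the range [90, 120]
rng = np.random.default_rng(1)
low = rng.integers(90, 121, size=(64, 64)).astype(np.uint8)
out = global_histogram_equalization(low)
```

After equalization the 31 occupied gray levels are spread across the full 0-255 range, which is exactly the contrast stretch (and the potential oversaturation) the abstract describes.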
Concentration of sunlight to solar-surface levels using non-imaging optics
NASA Astrophysics Data System (ADS)
Gleckman, Philip; O'Gallagher, Joseph; Winston, Roland
1989-05-01
An account is given of the design and operational principles of a solar concentrator that employs nonimaging optics to achieve a solar flux equal to 56,000 times that of ambient sunlight, yielding temperatures comparable to, and with further development of the device, exceeding those of the solar surface. In this scheme, a parabolic mirror primary concentrator is followed by a secondary concentrator, designed according to the edge-ray method, which is filled with a transparent oil. The device may be used in materials-processing, waste-disposal, and solar-pumped laser applications.
NASA Astrophysics Data System (ADS)
Wang, Yao; Vijaya Kumar, B. V. K.
2017-05-01
The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.
Evolution of haploid-diploid life cycles when haploid and diploid fitnesses are not equal.
Scott, Michael F; Rescan, Marie
2017-02-01
Many organisms spend a significant portion of their life cycle as haploids and as diploids (a haploid-diploid life cycle). However, the evolutionary processes that could maintain this sort of life cycle are unclear. Most previous models of ploidy evolution have assumed that the fitness effects of new mutations are equal in haploids and homozygous diploids, however, this equivalency is not supported by empirical data. With different mutational effects, the overall (intrinsic) fitness of a haploid would not be equal to that of a diploid after a series of substitution events. Intrinsic fitness differences between haploids and diploids can also arise directly, for example because diploids tend to have larger cell sizes than haploids. Here, we incorporate intrinsic fitness differences into genetic models for the evolution of time spent in the haploid versus diploid phases, in which ploidy affects whether new mutations are masked. Life-cycle evolution can be affected by intrinsic fitness differences between phases, the masking of mutations, or a combination of both. We find parameter ranges where these two selective forces act and show that the balance between them can favor convergence on a haploid-diploid life cycle, which is not observed in the absence of intrinsic fitness differences. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
Successive equimarginal approach for optimal design of a pump and treat system
NASA Astrophysics Data System (ADS)
Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.
2007-08-01
An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.
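The successive shifting that SEA performs can be sketched under simplifying assumptions: hypothetical concave productivity curves stand in for the groundwater model, and a shrinking transfer step stands in for the paper's actual update rule. Rate is repeatedly moved from the well with the lowest marginal productivity to the one with the highest until the marginals equalize.

```python
import numpy as np

# Hypothetical concave productivity curves f_i(q) = a_i * sqrt(q); the
# marginal productivity a_i / (2*sqrt(q)) falls as the pumping rate q rises.
a = np.array([1.0, 2.0, 3.0])
Q_total = 12.0                        # total pumping rate to allocate

def marginal(q):
    return a / (2.0 * np.sqrt(q))

q = np.full(3, Q_total / 3)           # start from an even split
step = 0.5
for _ in range(100000):
    m = marginal(q)
    lo, hi = np.argmin(m), np.argmax(m)
    shift = min(step, 0.5 * q[lo])    # keep every rate positive
    q_try = q.copy()
    q_try[lo] -= shift                # take rate from the least effective well
    q_try[hi] += shift                # give it to the most effective well
    m_try = marginal(q_try)
    if m_try.max() - m_try.min() < m.max() - m.min():
        q = q_try                     # spread of marginals narrowed: accept
    else:
        step *= 0.5                   # overshoot: shrink the transfer
    if step < 1e-9:
        break
```

With these illustrative curves the equal-marginal allocation is q_i ∝ a_i², which the loop approaches while keeping the total rate fixed.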
Secretarial Administration: The Interviewing Process.
ERIC Educational Resources Information Center
Nemesh, Anna
1979-01-01
Suggests classroom techniques to prepare business students for employment interviews and gives information on lawful and unlawful employment interview inquiries, as well as some fair employment legal requirements of the Equal Employment Opportunity Act of 1974, Civil Rights Act of 1964, Equal Pay Act of 1963, and Rehabilitation Act of 1973. (MF)
Equal Employment Opportunity and ADA Implications of Screening and Selection.
ERIC Educational Resources Information Center
Norton, Steven D.; Hundley, John R.
1995-01-01
The process of screening and selecting new employees is viewed as one having discrete steps, each with implications concerning Equal Employment Opportunity and the Americans with Disabilities Act. Several screening and selection methods are examined, with questionnaire forms used by Indiana University, South Bend provided for illustration. Typical…
NASA Astrophysics Data System (ADS)
Wijayanto, D.; Kurohman, F.; Nugroho, RA
2018-03-01
The research purpose was to develop a bioeconomic model of profit maximization that can be applied to red tilapia culture. The fish growth model was developed using a polynomial growth function. Profit was maximized by setting the first derivative of the profit equation with respect to culture time equal to zero. The research also developed equations to estimate the culture time needed to reach a target harvest size. The research proved that the model can be applied to red tilapia culture. In the case of this study, red tilapia culture achieved maximum profit at 584 days, with a profit of Rp. 28,605,731 per culture cycle. If a target harvest size of 250 g is used, red tilapia culture needs 82 days of culture time.
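The first-derivative-equals-zero step can be sketched with NumPy polynomials. All coefficients below (growth curve, price, fish count, costs) are illustrative placeholders, not the paper's fitted values, so the resulting optimum differs from the reported 584 days.

```python
import numpy as np

# Hypothetical polynomial growth w(t) in grams per fish versus day of culture,
# and a linear running cost; coefficients are illustrative only.
w = np.polynomial.Polynomial([5.0, 1.2, 0.004, -0.000009])
price_per_g, n_fish = 0.03, 10000       # currency units per gram, stock size
fixed_cost, cost_per_day = 2000.0, 250.0

profit = price_per_g * n_fish * w - np.polynomial.Polynomial([fixed_cost, cost_per_day])

# Profit maximization: solve d(profit)/dt = 0 and keep the admissible maximum
dprofit = profit.deriv()
roots = dprofit.roots()
cands = [r.real for r in roots
         if abs(r.imag) < 1e-9 and r.real > 0 and profit.deriv(2)(r.real) < 0]
t_star = min(cands)                     # optimal culture time, in days
```

The same machinery inverts easily for the harvest-size question: solving w(t) = target weight gives the culture time needed to reach a chosen size.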
The newborn oxygram: automated processing of transcutaneous oxygen data.
Horbar, J D; Clark, J T; Lucey, J F
1980-12-01
Hypoxemic and hyperoxemic episodes are common in newborns with respiratory disorders. We have developed a microprocessor-based data system for use with transcutaneous oxygen (TcPO2) monitors in an attempt to quantitate these episodes. The amount of time spent by an infant in each of ten preset TcPO2 ranges can be automatically recorded. These data are referred to as the oxygram. Fourteen newborn infants were monitored for a total of 552 hours using this system. They spent a mean of 2.96% of the time with a TcPO2 less than or equal to 40 torr and 0.26% of the time with a TcPO2 greater than 100 torr. Representative oxygrams are presented. Clinical and research applications of the data system are discussed.
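The oxygram bookkeeping amounts to binning a TcPO2 time series into preset ranges and reporting the percent of time spent in each. A sketch with simulated data follows; the ten bin edges are illustrative, not the device's exact ranges.

```python
import numpy as np

# Simulated minute-by-minute TcPO2 samples (torr) for a ten-hour session
rng = np.random.default_rng(2)
tcpo2 = rng.normal(70, 15, size=600).clip(0, 150)

edges = [0, 20, 30, 40, 50, 60, 70, 80, 90, 100, 151]   # ten preset ranges
counts, _ = np.histogram(tcpo2, bins=edges)
percent = 100.0 * counts / counts.sum()                  # the "oxygram"

pct_hypoxemic = 100.0 * np.mean(tcpo2 <= 40)    # time with TcPO2 <= 40 torr
pct_hyperoxemic = 100.0 * np.mean(tcpo2 > 100)  # time with TcPO2 > 100 torr
```

The two summary percentages correspond to the 2.96% and 0.26% figures the abstract reports for the fourteen monitored infants.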
In conflict with ourselves? An investigation of heuristic and analytic processes in decision making.
Bonner, Carissa; Newell, Ben R
2010-03-01
Many theorists propose two types of processing: heuristic and analytic. In conflict tasks, in which these processing types lead to opposing responses, giving the analytic response may require both detection and resolution of the conflict. The ratio bias task, in which people tend to treat larger numbered ratios (e.g., 20/100) as indicating a higher likelihood of winning than do equivalent smaller numbered ratios (e.g., 2/10), is considered to induce such a conflict. Experiment 1 showed response time differences associated with conflict detection, resolution, and the amount of conflict induced. The conflict detection and resolution effects were replicated in Experiment 2 and were not affected by decreasing the influence of the heuristic response or decreasing the capacity to make the analytic response. The results are consistent with dual-process accounts, but a single-process account in which quantitative, rather than qualitative, differences in processing are assumed fares equally well in explaining the data.
Concurrent schedules: Effects of time- and response-allocation constraints
Davison, Michael
1991-01-01
Five pigeons were trained on concurrent variable-interval schedules arranged on two keys. In Part 1 of the experiment, the subjects responded under no constraints, and the ratios of reinforcers obtainable were varied over five levels. In Part 2, the conditions of the experiment were changed such that the time spent responding on the left key before a subsequent changeover to the right key determined the minimum time that must be spent responding on the right key before a changeover to the left key could occur. When the left key provided a higher reinforcer rate than the right key, this procedure ensured that the time allocated to the two keys was approximately equal. The data showed that such a time-allocation constraint only marginally constrained response allocation. In Part 3, the numbers of responses emitted on the left key before a changeover to the right key determined the minimum number of responses that had to be emitted on the right key before a changeover to the left key could occur. This response constraint completely constrained time allocation. These data are consistent with the view that response allocation is a fundamental process (and time allocation a derivative process), or that response and time allocation are independently controlled, in concurrent-schedule performance. PMID:16812632
Universal Rim Thickness in Unsteady Sheet Fragmentation.
Wang, Y; Dandekar, R; Bustos, N; Poulain, S; Bourouiba, L
2018-05-18
Unsteady fragmentation of a fluid bulk into droplets is important for epidemiology as it governs the transport of pathogens from sneezes and coughs, or from contaminated crops in agriculture. It is also ubiquitous in industrial processes such as paint, coating, and combustion. Unsteady fragmentation is distinct from steady fragmentation on which most theoretical efforts have been focused thus far. We address this gap by studying a canonical unsteady fragmentation process: the breakup from a drop impact on a finite surface where the drop fluid is transferred to a free expanding sheet of time-varying properties and bounded by a rim of time-varying thickness. The continuous rim destabilization selects the final spray droplets, yet this process remains poorly understood. We combine theory with advanced image analysis to study the unsteady rim destabilization. We show that, at all times, the rim thickness is governed by a local instantaneous Bond number equal to unity, defined with the instantaneous, local, unsteady rim acceleration. This criterion is found to be robust and universal for a family of unsteady inviscid fluid sheet fragmentation phenomena, from impacts of drops on various surface geometries to impacts on films. We discuss under which viscous and viscoelastic conditions the criterion continues to govern the unsteady rim thickness.
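As we read the criterion, setting the instantaneous Bond number built on the rim thickness b and the local rim acceleration a equal to one gives b directly; the sketch below uses illustrative water-like numbers, and the paper's precise definition of the local Bond number should be consulted before relying on it.

```python
import numpy as np

def rim_thickness(rho, sigma, accel):
    """Rim thickness b from a unit instantaneous Bond number:
    Bo = rho * b**2 * |a| / sigma = 1  =>  b = sqrt(sigma / (rho * |a|))."""
    return np.sqrt(sigma / (rho * np.abs(accel)))

# Water-like sheet rim decelerating at ~100 m/s^2 (illustrative values)
b = rim_thickness(rho=1000.0, sigma=0.072, accel=-100.0)  # metres
```

Because the acceleration is time-varying, b(t) shrinks as the rim decelerates harder, which is the unsteady selection mechanism the abstract describes.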
Universal Rim Thickness in Unsteady Sheet Fragmentation
NASA Astrophysics Data System (ADS)
Wang, Y.; Dandekar, R.; Bustos, N.; Poulain, S.; Bourouiba, L.
2018-05-01
Unsteady fragmentation of a fluid bulk into droplets is important for epidemiology as it governs the transport of pathogens from sneezes and coughs, or from contaminated crops in agriculture. It is also ubiquitous in industrial processes such as paint, coating, and combustion. Unsteady fragmentation is distinct from steady fragmentation on which most theoretical efforts have been focused thus far. We address this gap by studying a canonical unsteady fragmentation process: the breakup from a drop impact on a finite surface where the drop fluid is transferred to a free expanding sheet of time-varying properties and bounded by a rim of time-varying thickness. The continuous rim destabilization selects the final spray droplets, yet this process remains poorly understood. We combine theory with advanced image analysis to study the unsteady rim destabilization. We show that, at all times, the rim thickness is governed by a local instantaneous Bond number equal to unity, defined with the instantaneous, local, unsteady rim acceleration. This criterion is found to be robust and universal for a family of unsteady inviscid fluid sheet fragmentation phenomena, from impacts of drops on various surface geometries to impacts on films. We discuss under which viscous and viscoelastic conditions the criterion continues to govern the unsteady rim thickness.
Peter, Emanuel K; Pivkin, Igor V; Shea, Joan-Emma
2015-04-14
In Monte-Carlo simulations of protein folding, pathways and folding times depend on the appropriate choice of the Monte-Carlo move or process path. We developed a generalized set of process paths for a hybrid kinetic Monte Carlo-Molecular dynamics algorithm, which makes use of a novel constant time-update and allows formation of α-helical and β-stranded secondary structures. We apply our new algorithm to the folding of 3 different proteins: TrpCage, GB1, and TrpZip4. All three systems are seen to fold within the range of the experimental folding times. For the β-hairpins, we observe that loop formation is the rate-determining process followed by collapse and formation of the native core. Cluster analysis of both peptides reveals that GB1 folds with equal likelihood along a zipper or a hydrophobic collapse mechanism, while TrpZip4 follows primarily a zipper pathway. The difference observed in the folding behavior of the two proteins can be attributed to the different arrangements of their hydrophobic core, strongly packed, and dry in case of TrpZip4, and partially hydrated in the case of GB1.
2012-01-01
Advanced oxidation processes such as Fenton and photo-Fenton have been effectively applied to oxidize persistent organic compounds in solid waste leachate and convert them to harmless materials and products. However, there are limited data on the application of the Fenton-like process in leachate treatment. Therefore, this study was designed with the objective of treating municipal landfill leachate by Fenton, Fenton-like, and photo-Fenton processes to determine the effect of different variables, by setting up a pilot system. The leachate was collected from a municipal unsanitary landfill in Qaem-Shahr in the north of Iran. The Fenton and Fenton-like processes were conducted by the jar-test method. The photo-Fenton process was performed in a glass photo-reactor. In all processes, H2O2 was used as the oxidant, and FeSO4·7H2O and FeCl3·6H2O were used as reagents. All parameters were measured based on standard methods. The results showed that the optimum concentration of H2O2 was 5 g/L for the Fenton-like process and 3 g/L for the Fenton and photo-Fenton processes. The optimum H2O2:Fe2+/Fe3+ ratio was 8:1 in all processes. At optimum conditions, COD removal was 69.6%, 65.9%, and 83.2% in the Fenton, Fenton-like, and photo-Fenton processes, respectively. In addition, the optimum pH values were 3, 5, and 3, and the optimum contact times were 150, 90, and 120 minutes for the Fenton, Fenton-like, and photo-Fenton processes, respectively. After all processes, the biodegradability (BOD5/COD ratio) of the treated leachate was increased compared to that of the raw leachate, and the highest increase in BOD5/COD ratio was observed in the photo-Fenton process. The efficiency of the Fenton-like process was overall less than that of the Fenton and photo-Fenton processes; however, the Fenton-like process operated at a higher pH and did not present problems. PMID:23369204
Liao, Bolin; Zhang, Yunong; Jin, Long
2016-02-01
In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN), and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h^3), O(h^2), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, of which the results further substantiate the theoretical findings and the efficacy of Taylor-type discrete-time ZNN models. Finally, the comparisons with Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but the local HE method sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two ways to apply the proposed CE algorithm to color images: one processes the luminance channel, and the other processes each color channel separately. However, these approaches can incur excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and it outperforms existing methods in terms of objective evaluation metrics.
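The ratio-between-channels idea can be sketched generically: equalize a luminance proxy, then apply one common per-pixel gain to R, G, and B so their ratios, and hence the hue, are unchanged wherever no channel clips. This is an illustrative sketch of the hue-preserving principle, not the paper's channel-adaptive algorithm.

```python
import numpy as np

def equalize_preserving_ratios(rgb):
    """Equalize a luminance proxy, then rescale R, G, B by the same
    per-pixel factor so inter-channel ratios (hue) are preserved.
    Assumes a non-constant float image with values in [0, 255]."""
    y = rgb.mean(axis=2)                          # simple luminance proxy
    hist, _ = np.histogram(y, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0][0]
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0
    y_eq = lut[y.astype(np.uint8)]                # equalized luminance
    gain = y_eq / np.maximum(y, 1e-6)             # one common gain per pixel
    return np.clip(rgb * gain[..., None], 0, 255)

rng = np.random.default_rng(3)
img = rng.integers(80, 130, size=(32, 32, 3)).astype(float)
out = equalize_preserving_ratios(img)
```

Where the gain pushes a channel past 255 the clip breaks the ratio, which is one face of the saturation problem the paper's channel-adaptive equalization is designed to handle.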
TIME CALIBRATED OSCILLOSCOPE SWEEP
Owren, H.M.; Johnson, B.M.; Smith, V.L.
1958-04-22
The time calibrator of an electric signal displayed on an oscilloscope is described. In contrast to the conventional technique of using time-calibrated divisions on the face of the oscilloscope, this invention provides means for directly superimposing equal time spaced markers upon a signal displayed upon an oscilloscope. More explicitly, the present invention includes generally a generator for developing a linear saw-tooth voltage and a circuit for combining a high-frequency sinusoidal voltage of a suitable amplitude and frequency with the saw-tooth voltage to produce a resultant sweep deflection voltage having a wave shape which is substantially linear with respect to time between equal time spaced incremental plateau regions occurring once each cycle of the sinusoidal voltage. The foregoing sweep voltage when applied to the horizontal deflection plates in combination with a signal to be observed applied to the vertical deflection plates of a cathode ray oscilloscope produces an image on the viewing screen which is essentially a display of the signal to be observed with respect to time. Intensified spots, or certain other conspicuous indications corresponding to the equal time spaced plateau regions of said sweep voltage, appear superimposed upon said displayed signal, which indications are therefore suitable for direct time calibration purposes.
A CMOS merged CDR and continuous-time adaptive equalizer
NASA Astrophysics Data System (ADS)
Sánchez-Azqueta, C.; Aguirre, J.; Gimeno, C.; Aldea, C.; Celma, S.
2015-06-01
We present a low-voltage merged CDR and continuous-time adaptive equalizer capable of compensating the attenuation of a SI-POF channel while simultaneously synchronizing and regenerating the incoming signal in a single stage. The system operates at 1.25 Gbps with NRZ modulation through a 50-m SI-POF channel and is designed in standard 0.18-μm CMOS fed at 1 V, with a power consumption of 43.4 mW.
Waking and scrambling in holographic heating up
NASA Astrophysics Data System (ADS)
Ageev, D. S.; Aref'eva, I. Ya.
2017-10-01
Using holographic methods, we study the heating up process in quantum field theory. As a holographic dual of this process, we use absorption of a thin shell on a black brane. We find the explicit form of the time evolution of the quantum mutual information during heating up from the temperature T_i to the temperature T_f in a system of two intervals in two-dimensional space-time. We determine the geometric characteristics of the system under which the time dependence of the mutual information has a bell shape: it is equal to zero at the initial instant, becomes positive at some subsequent instant, further attains its maximum, and again decreases to zero. Such a behavior of the mutual information occurs in the process of photosynthesis. We show that if the distance x between the intervals is less than log 2/(2πT_i), then the evolution of the holographic mutual information has a bell shape only for intervals whose lengths are bounded from above and below. For sufficiently large x, i.e., for x > log 2/(2πT_i), the bell-like shape of the time dependence of the quantum mutual information is present only for sufficiently large intervals. Moreover, the zone narrows as T_i increases and widens as T_f increases.
26 CFR 1.684-1 - Recognition of gain on transfers to certain foreign trusts and estates.
Code of Federal Regulations, 2012 CFR
2012-04-01
... required to recognize gain at the time of the transfer equal to the excess of the fair market value of the...) of this section, A recognizes gain at the time of the transfer equal to 800X. Example 4. Exchange of... 26 Internal Revenue 8 2012-04-01 2012-04-01 false Recognition of gain on transfers to certain...
26 CFR 1.684-1 - Recognition of gain on transfers to certain foreign trusts and estates.
Code of Federal Regulations, 2013 CFR
2013-04-01
... required to recognize gain at the time of the transfer equal to the excess of the fair market value of the...) of this section, A recognizes gain at the time of the transfer equal to 800X. Example 4. Exchange of... 26 Internal Revenue 8 2013-04-01 2013-04-01 false Recognition of gain on transfers to certain...
Zhang, Junwen; Yu, Jianjun; Chi, Nan; Chien, Hung-Chang
2014-08-25
We theoretically and experimentally investigate a time-domain digital pre-equalization (DPEQ) scheme for bandwidth-limited optical coherent communication systems, which is based on feedback of channel characteristics from the receiver-side blind adaptive equalizers, such as the least-mean-squares (LMS) algorithm and the constant-modulus or multi-modulus algorithms (CMA, MMA). Based on the proposed DPEQ scheme, we theoretically and experimentally study its performance under various channel conditions and resolutions for channel estimation, such as filtering bandwidth, tap length, and OSNR. Using a high-speed 64-GSa/s DAC in cooperation with the proposed DPEQ technique, we successfully synthesized band-limited 40-Gbaud signals in the modulation formats of polarization-division multiplexed (PDM) quadrature phase shift keying (QPSK), 8-quadrature amplitude modulation (QAM), and 16-QAM; significant improvement in both back-to-back and transmission BER performance is also demonstrated.
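The receiver-side adaptive equalizers mentioned above (e.g., LMS) follow a standard stochastic-gradient tap update. A toy real-valued baseband sketch, with an assumed two-tap channel and hypothetical function name (not the authors' DPEQ implementation), is:

```python
import random

def lms_equalize(received, desired, n_taps=4, mu=0.02):
    """Train an FIR equalizer with the LMS update w <- w + mu * e * x."""
    w = [0.0] * n_taps
    out = []
    for k in range(len(received)):
        # Tap-delay line, most recent sample first.
        x = [received[k - i] if k - i >= 0 else 0.0 for i in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = desired[k] - y              # error against the training symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        out.append(y)
    return w, out

# BPSK training symbols through an assumed 1 + 0.4 z^-1 channel (no noise).
random.seed(0)
sym = [random.choice([-1.0, 1.0]) for _ in range(3000)]
rx = [sym[k] + (0.4 * sym[k - 1] if k else 0.0) for k in range(len(sym))]
w, out = lms_equalize(rx, sym)
```

After a few hundred symbols the taps approximate the channel inverse and hard decisions on the equalizer output agree with the transmitted symbols.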
NASA Astrophysics Data System (ADS)
Quang Tran, Danh; Li, Jin; Xuan, Fuzhen; Xiao, Ting
2018-06-01
Dielectric elastomers (DEs) belong to a group of polymers that exhibit time-dependent deformation due to viscoelastic effects. In recent years, viscoelasticity has been accounted for in modeling in order to understand the complete electromechanical behavior of dielectric elastomer actuators (DEAs). In this paper, we investigate the actuation performance of a circular DEA under different equal and unequal biaxial pre-stretches, based on a nonlinear rheological model. The theoretical results are validated by experiments, which verify the electromechanical constitutive equation of the DEs. The viscoelastic mechanical characteristics are analyzed through simulation and experiment to describe the influence of frequency, voltage, pre-stretch, and waveform on the actuation response of the actuator. Our study indicates that the DEA with different equal or unequal biaxial pre-stretches undergoes different actuation performance when subjected to high voltage. Under an unequal biaxial pre-stretch, the DEA deforms unequally and shows different deformation abilities in the two directions. The relative creep strain of the DEA due to viscoelasticity can be reduced by increasing the pre-stretch ratio. A higher equal biaxial pre-stretch yields larger deformation strain, improves actuation response time, and reduces the drifting of the equilibrium position in the dynamic response of the DEA when activated by step and periodic voltages, while increasing the frequency inhibits the output stretch amplitude. The results in this paper can provide theoretical guidance and an application reference for the design and control of viscoelastic DEAs.
Design of production process main shaft process with lean manufacturing to improve productivity
NASA Astrophysics Data System (ADS)
Siregar, I.; Nasution, A. A.; Andayani, U.; Anizar; Syahputri, K.
2018-02-01
The object of this research is a manufacturing company that produces oil palm machinery parts. In the production process there are delays in the completion of Main shaft orders. Delays in the completion of orders indicate the low productivity of the company in terms of resource utilization. This study aimed to obtain a proposed improvement of the production process that can raise productivity by identifying and eliminating activities that do not add value (non-value-added activity). One approach that can be used to reduce and eliminate non-value-added activity is Lean Manufacturing. This study focuses on the identification of non-value-added activity with value stream mapping analysis tools, while the elimination of non-value-added activity is done with the 5-whys tool and implementation of a pull demand system. The research shows that non-value-added activity in the production process of the main shaft amounts to 9,509.51 minutes of the total lead time of 10,804.59 minutes. This shows that the level of efficiency (Process Cycle Efficiency) in the production process of the main shaft is still very low, at 11.89%. Estimation of the improvements shows a decrease in total lead time to 4,355.08 minutes and a greater process cycle efficiency of 29.73%, which indicates that the process approaches the lean production concept.
Sleep-related memory consolidation in primary insomnia.
Nissen, Christoph; Kloepfer, Corinna; Feige, Bernd; Piosczyk, Hannah; Spiegelhalder, Kai; Voderholzer, Ulrich; Riemann, Dieter
2011-03-01
It has been suggested that healthy sleep facilitates the consolidation of newly acquired memories and underlying brain plasticity. The authors tested the hypothesis that patients with primary insomnia (PI) would show deficits in sleep-related memory consolidation compared to good sleeper controls (GSC). The study used a four-group parallel design (n=86) to investigate the effects of 12 h of night-time, including polysomnographically monitored sleep ('sleep condition' in PI and GSC), versus 12 h of daytime wakefulness ('wake condition' in PI and GSC) on procedural (mirror tracing task) and declarative memory consolidation (visual and verbal learning task). Demographic characteristics and memory encoding did not differ between the groups at baseline. Polysomnography revealed a significantly disturbed sleep profile in PI compared to GSC in the sleep condition. Night-time periods including sleep in GSC were associated with (i) a significantly enhanced procedural and declarative verbal memory consolidation compared to equal periods of daytime wakefulness in GSC and (ii) a significantly enhanced procedural memory consolidation compared to equal periods of daytime wakefulness and night-time sleep in PI. Across retention intervals of daytime wakefulness, no differences between the experimental groups were observed. This pattern of results suggests that healthy sleep fosters the consolidation of new memories, and that this process is impaired for procedural memories in patients with PI. Future work is needed to investigate the impact of treatment on improving sleep and memory. © 2010 European Sleep Research Society.
First results from the energetic particle instrument on the OEDIPUS-C sounding rocket
NASA Astrophysics Data System (ADS)
Gough, M. P.; Hardy, D. A.; James, H. G.
The Canadian/US OEDIPUS-C rocket was flown from the Poker Flat Rocket Range on November 6, 1995 as a mother-son sounding rocket. It was designed to study auroral ionospheric plasma physics using active wave sounding and to prove tether technology. The payload separated into two sections, reaching a separation of 1200 m along the Earth's magnetic field. One section included a frequency-stepped HF transmitter and the other included a synchronised HF receiver. Both sections included Energetic Particle Instruments, EPI, stepped in energy synchronously with the transmitter steps. On-board EPI particle processing in both payloads provided direct measurements of electron heating, wave-particle interactions via particle correlators, and a high resolution measurement of wave induced particle heating via transmitter synchronised fast sampling. Strong electron heating was observed at times when the HF transmitter frequency was equal to a harmonic of the electron gyrofrequency, f_ce, or equal to the upper hybrid frequency, f_uh.
ERIC Educational Resources Information Center
NCRIEEO Newsletter, 1972
1972-01-01
The Equal Educational Opportunity Workshop for Human Rights Workers focused on the theme "Equal Educational Opportunity--What Does It Mean to the Human Rights Worker? A Deep Examination of Professional Commitment." Most school systems and educational institutions have human rights specialists devoting staff time and resources to race and…
Regulatory Fit and Equal Opportunity/Diversity: Implications for DEOMI
2013-01-01
than demographic diversity (Ivancevich & Gilbert, 2000); the goal of equality is to create and manage a heterogeneous mix of abilities, skills, ideas...accepted. Recruiting of minorities and women are not seen as violations of EO laws (Kravitz, 2008; Newman & Lyon, 2009; Pyburn, et al., 2008). Similarly...209-213. Ivancevich, J. M. & Gilbert, J. A. (2000). Diversity management: Time for a new approach
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles
2015-01-01
The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts. PMID:26579044
Percolation transport theory and relevance to soil formation, vegetation growth, and productivity
NASA Astrophysics Data System (ADS)
Hunt, A. G.; Ghanbarian, B.
2016-12-01
Scaling laws of percolation theory have been applied to generate the time dependence of vegetation growth rates (both intensively managed and natural) and soil formation rates. The soil depth is thus equal to the solute vertical transport distance; the soil production function, chemical weathering rates, and C and N storage rates are all given by the time derivative of the soil depth. Approximate numerical coefficients based on the maximum flow rates in soils have been proposed, leading to a broad understanding of such processes. What is now required is an accurate understanding of the variability of the coefficients in the scaling relationships. The present abstract focuses on the scaling relationship for solute transport and soil formation. A soil formation rate relates length, x, and time, t, scales, meaning that the missing coefficient must include information about fundamental space and time scales, x0 and t0. x0 is proposed to be a fundamental mineral heterogeneity scale, i.e. a median particle diameter. t0 is then found from the ratio of x0 and a fundamental flow rate, v0, which is identified with the net infiltration rate. The net infiltration rate is equal to precipitation, P, less evapotranspiration, ET, plus run-on less run-off. Using this hypothesis, it is possible to predict soil depths and formation rates as functions of time and P - ET, the formation rate as a function of depth, and soil calcic and gypsic horizon depths as functions of P - ET. It is also possible to determine when soils are in equilibrium, and to predict relationships between erosion rates and soil formation rates.
Neural mechanisms of motivated forgetting
Anderson, Michael C.; Hanslmayr, Simon
2014-01-01
Not all memories are equally welcome in awareness. People limit the time they spend thinking about unpleasant experiences, a process that begins during encoding, but that continues when cues later remind someone of the memory. Here, we review the emerging behavioural and neuroimaging evidence that suppressing awareness of an unwelcome memory, at encoding or retrieval, is achieved by inhibitory control processes mediated by the lateral prefrontal cortex. These mechanisms interact with neural structures that represent experiences in memory, disrupting traces that support retention. Thus, mechanisms engaged to regulate momentary awareness introduce lasting biases in which experiences remain accessible. We argue that theories of forgetting that neglect the motivated control of awareness omit a powerful force shaping the retention of our past. PMID:24747000
NASA Astrophysics Data System (ADS)
Boche, Holger; Cai, Minglai; Deppe, Christian; Nötzel, Janis
2017-10-01
We analyze arbitrarily varying classical-quantum wiretap channels. These channels are subject to two attacks at the same time: one passive (eavesdropping) and one active (jamming). We elaborate on our previous studies [H. Boche et al., Quantum Inf. Process. 15(11), 4853-4895 (2016) and H. Boche et al., Quantum Inf. Process. 16(1), 1-48 (2016)] by introducing a reduced class of allowable codes that fulfills a more stringent secrecy requirement than earlier definitions. In addition, we prove that non-symmetrizability of the legal link is sufficient for equality of the deterministic and the common randomness assisted secrecy capacities. Finally, we focus on analytic properties of both secrecy capacities: We completely characterize their discontinuity points and their super-activation properties.
Development of a New Paradigm for Analysis of Disdrometric Data
NASA Astrophysics Data System (ADS)
Larsen, Michael L.; Kostinski, Alexander B.
2017-04-01
A number of disdrometers currently on the market are able to characterize hydrometeors on a drop-by-drop basis with arrival timestamps associated with each arriving hydrometeor. This allows an investigator to parse a time series into disjoint intervals that have equal numbers of drops, instead of the traditional subdivision into equal time intervals. Such a "fixed-N" partitioning of the data can provide several advantages over the traditional equal time binning method, especially within the context of quantifying measurement uncertainty (which typically scales with the number of hydrometeors in each sample). An added bonus is the natural elimination of measurements that are devoid of all drops. This analysis method is investigated by utilizing data from a dense array of disdrometers located near Charleston, South Carolina, USA. Implications for the usefulness of this method in future studies are explored.
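The fixed-N partitioning idea can be sketched directly on a list of arrival timestamps; each interval then contains exactly the same number of drops, so counting uncertainty is constant across intervals, and empty intervals cannot occur. The helper names below are hypothetical, not the instrument's software.

```python
def fixed_n_partition(timestamps, n_drops):
    """Split sorted arrival timestamps into disjoint runs of n_drops each.

    Returns (t_start, t_end, count) tuples; a trailing partial run is
    dropped so every interval carries equal statistical weight.
    """
    runs = []
    for i in range(0, len(timestamps) - n_drops + 1, n_drops):
        chunk = timestamps[i:i + n_drops]
        runs.append((chunk[0], chunk[-1], len(chunk)))
    return runs

def fixed_time_partition(timestamps, t0, t1, dt):
    """Traditional equal-time binning: per-bin counts vary, and bins may be empty."""
    n_bins = int((t1 - t0) / dt)
    counts = [0] * n_bins
    for t in timestamps:
        k = int((t - t0) / dt)
        if 0 <= k < n_bins:
            counts[k] += 1
    return counts

# Toy arrival times (seconds); real disdrometer records would be much longer.
arrivals = [0.1, 0.4, 0.5, 1.2, 1.3, 1.9, 2.2, 2.4, 3.8, 4.0]
runs = fixed_n_partition(arrivals, 5)
```

On the toy data, fixed-N partitioning yields two runs of exactly five drops spanning variable durations, whereas one-second binning yields four bins with 3, 3, 2, and 1 drops.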
EQUAL-quant: an international external quality assessment scheme for real-time PCR.
Ramsden, Simon C; Daly, Sarah; Geilenkeuser, Wolf-Jochen; Duncan, Graeme; Hermitte, Fabienne; Marubini, Ettore; Neumaier, Michael; Orlando, Claudio; Palicka, Vladimir; Paradiso, Angelo; Pazzagli, Mario; Pizzamiglio, Sara; Verderio, Paolo
2006-08-01
Quantitative gene expression analysis by real-time PCR is important in several diagnostic areas, such as the detection of minimum residual disease in leukemia and the prognostic assessment of cancer patients. To address quality assurance in this technically challenging area, the European Union (EU) has funded the EQUAL project to develop methodologic external quality assessment (EQA) relevant to diagnostic and research laboratories among the EU member states. We report here the results of the EQUAL-quant program, which assesses standards in the use of TaqMan probes, one of the most widely used assays in the implementation of real-time PCR. The EQUAL-quant reagent set was developed to assess the technical execution of a standard TaqMan assay, including RNA extraction, reverse transcription, and real-time PCR quantification of target DNA copy number. The multidisciplinary EQA scheme included 137 participating laboratories from 29 countries. We demonstrated significant differences in performance among laboratories, with 20% of laboratories reporting at least one result lacking in precision and/or accuracy according to the statistical procedures described. No differences in performance were observed for the >10 different testing platforms used by the study participants. This EQA scheme demonstrated both the requirement and demand for external assessment of technical standards in real-time PCR. The reagent design and the statistical tools developed within this project will provide a benchmark for defining acceptable working standards in this emerging technology.
ERIC Educational Resources Information Center
Ferreira, Frances J.; Kamal, Mostafa Azad
2017-01-01
Sustainable Development Goal (SDG) 5, "achieve gender equality and empower all women and girls", emphasises the need for "providing women and girls with equal access to education, health care, decent work, and representation in political and economic decision-making processes [which] will fuel sustainable economies and benefit…
29 CFR 1614.304 - Contents of petition.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false Contents of petition. 1614.304 Section 1614.304 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION FEDERAL SECTOR EQUAL EMPLOYMENT OPPORTUNITY Related Processes § 1614.304 Contents of petition. (a) Form. Petitions must be written or typed, but may use any format includin...
Commercial Contract Training, Navy Area VOTEC Support Center (AVSC) Guidelines
1975-06-01
either manual or power operated equipment including collators, folders, paper drills, stitchers and cutters, the student will process printed materials...Challenge, model JF or equal). d. Folding machine, size 17-1/2 x 22-1/2" (Challenge heavy duty model 175 or equal). e. Stitcher, paper (Bostitch model 7
Racial Equality. To Protect These Rights Series.
ERIC Educational Resources Information Center
McDonald, Laughlin
A historical review of racial discrimination against Negroes is the scope of this volume, part of a series of six volumes which explore the basic American rights. These include due process of law, freedom of speech and religious freedom. This volume traces the development of racial equality in the legal system, explores the controversies and…
Detecting entanglement with Jarzynski's equality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hide, Jenny; Vedral, Vlatko; Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543
2010-06-15
We present a method for detecting the entanglement of a state using nonequilibrium processes. A comparison of relative entropies allows us to construct an entanglement witness. The relative entropy can further be related to the quantum Jarzynski equality, allowing nonequilibrium work to be used in entanglement detection. To exemplify our results, we consider two different spin chains.
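In standard notation, the Jarzynski equality invoked above relates the work W performed in repeated realizations of a nonequilibrium process to the equilibrium free-energy difference ΔF:

```latex
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_{B} T},
```

where the angle brackets denote an average over realizations of the driving protocol.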
Del Sorbo, Maria Rosaria; Balzano, Walter; Donato, Michele; Draghici, Sorin
2013-11-01
Differential expression of genes detected with the analysis of high throughput genomic experiments is a commonly used intermediate step for the identification of signaling pathways involved in the response to different biological conditions. The impact analysis was the first approach for the analysis of signaling pathways involved in a certain biological process that was able to take into account not only the magnitude of the expression change of the genes but also the topology of signaling pathways, including the type of each interaction between the genes. In the impact analysis, signaling pathways are represented as weighted directed graphs with genes as nodes and the interactions between genes as edges. Edge weights are represented by a β factor, the regulatory efficiency, which is assumed to be equal to 1 in inductive interactions between genes and equal to -1 in repressive interactions. This study presents a similarity analysis between gene expression time series aimed at finding correspondences with the regulatory efficiency, i.e. the β factor as found in a widely used pathway database. Here, we focused on correlations among genes directly connected in signaling pathways, assuming that the expression variations of upstream genes impact immediately downstream genes in a short time interval and without significant influence from the interactions with other genes. Time series were processed using three different similarity metrics. The first metric is based on bit-string matching; the second is a specific application of Dynamic Time Warping to detect similarities even in the presence of stretching and delays; the third is a quantitative comparative analysis resulting from an evaluation of the frequency-domain representation of the time series: the similarity metric is the correlation between dominant spectral components.
These three approaches are tested on real data and pathways, and a comparison is performed using Information Retrieval benchmark tools, indicating the frequency approach as the best similarity metric among the three, for its ability to detect the correlation based on the correspondence of the most significant frequency components. Copyright © 2013. Published by Elsevier Ireland Ltd.
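The third metric above, agreement of dominant spectral components, can be sketched with a naive DFT; the function names and toy series are illustrative assumptions, not the authors' implementation.

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive DFT magnitude spectrum (positive frequencies, DC excluded)."""
    n = len(x)
    mags = []
    for k in range(1, n // 2 + 1):
        s = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        mags.append(abs(s))
    return mags

def dominant_frequency(x):
    """Frequency index (cycles per record length) of the largest component."""
    mags = dft_magnitudes(x)
    return 1 + mags.index(max(mags))

# Two series sharing the same dominant rhythm (2 cycles per record) should
# agree despite differing amplitude and phase; a 4-cycle series should not.
n = 32
a = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
b = [0.5 * math.sin(2 * math.pi * 2 * t / n + 0.3) for t in range(n)]
c = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
```

Because the DFT magnitude is insensitive to amplitude scaling and phase shifts, `a` and `b` share the dominant component at 2 cycles while `c` peaks at 4, mirroring how the spectral metric tolerates delays that defeat pointwise comparison.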
Domain specificity versus expertise: factors influencing distinct processing of faces.
Carmel, David; Bentin, Shlomo
2002-02-01
To explore face specificity in visual processing, we compared the role of task-associated strategies and expertise on the N170 event-related potential (ERP) component elicited by human faces with the ERPs elicited by cars, birds, items of furniture, and ape faces. In Experiment 1, participants performed a car monitoring task and an animacy decision task. In Experiment 2, participants monitored human faces while faces of apes were the distracters. Faces elicited an equally conspicuous N170, significantly larger than the ERPs elicited by non-face categories regardless of whether they were ignored or had an equal status with other categories (Experiment 1), or were the targets (Experiment 2). In contrast, the negative component elicited by cars during the same time range was larger if they were targets than if they were not. Furthermore, unlike the posterior-temporal distribution of the N170, the negative component elicited by cars and its modulation by task were more conspicuous at occipital sites. Faces of apes elicited an N170 that was similar in amplitude to that elicited by the human face targets, albeit peaking 10 ms later. As our participants were not ape experts, this pattern indicates that the N170 is face-specific, but not species-specific, i.e. it is elicited by particular face features regardless of expertise. Overall, these results demonstrate the domain specificity of the visual mechanism implicated in processing faces, a mechanism which is not influenced by either task or expertise. The processing of other objects is probably accomplished by a more general visual processor, which is sensitive to strategic manipulations and attention.
A 5 Gb/s CMOS adaptive equalizer for serial link
NASA Astrophysics Data System (ADS)
Wu, Hongbing; Wang, Jingyu; Liu, Hongxia
2018-04-01
A 5 Gb/s adaptive equalizer with a new adaptation scheme is presented here, using a 0.13 μm CMOS process. The circuit consists of the combination of an equalizer amplifier, a limiter amplifier, and an adaptation loop. The adaptive algorithm exploits both the low-frequency gain loop and the equalizer loop to minimize the inter-symbol interference (ISI) for a variety of cable characteristics. In addition, an offset cancellation loop is used to alleviate the offset influence of the signal path. The adaptive equalizer core occupies an area of 0.3567 mm2 and consumes 81.7 mW from a 1.8 V power supply. Experimental results demonstrate that the equalizer can compensate for the designed cable loss with 0.23 UI peak-to-peak jitter. Project supported by the National Natural Science Foundation of China (No. 61376099), the Foundation for Fundamental Research of China (No. JSZL2016110B003), and the Major Fundamental Research Program of Shaanxi (No. 2017ZDJC-26).
Aoki, Ryuta; Matsumoto, Madoka; Yomogida, Yukihito; Izuma, Keise; Murayama, Kou; Sugiura, Ayaka; Camerer, Colin F; Adolphs, Ralph; Matsumoto, Kenji
2014-04-30
A distinct aspect of the sense of fairness in humans is that we care not only about equality in material rewards but also about equality in nonmaterial values. One such value is the opportunity to choose freely among many options, often regarded as a fundamental right to economic freedom. In modern developed societies, equal opportunities in work, living, and lifestyle are enforced by antidiscrimination laws. Despite the widespread endorsement of equal opportunity, no studies have explored how people assign value to it. We used functional magnetic resonance imaging to identify the neural substrates for subjective valuation of equality in choice opportunity. Participants performed a two-person choice task in which the number of choices available was varied across trials independently of choice outcomes. By using this procedure, we manipulated the degree of equality in choice opportunity between players and dissociated it from the value of reward outcomes and their equality. We found that activation in the ventromedial prefrontal cortex (vmPFC) tracked the degree to which the number of options between the two players was equal. In contrast, activation in the ventral striatum tracked the number of options available to participants themselves but not the equality between players. Our results demonstrate that the vmPFC, a key brain region previously implicated in the processing of social values, is also involved in valuation of equality in choice opportunity between individuals. These findings may provide valuable insight into the human ability to value equal opportunity, a characteristic long emphasized in politics, economics, and philosophy.
Parton physics on a Euclidean lattice.
Ji, Xiangdong
2013-06-28
I show that the parton physics related to correlations of quarks and gluons on the light cone can be studied through the matrix elements of frame-dependent, equal-time correlators in the large momentum limit. This observation allows practical calculations of parton properties on a Euclidean lattice. As an example, I demonstrate how to recover the leading-twist quark distribution by boosting an equal-time correlator to a large momentum.
26 CFR 1.467-7 - Section 467 recapture and other rules relating to dispositions and modifications.
Code of Federal Regulations, 2012 CFR
2012-04-01
... is equal to the net present value at the time of the transfer (but after giving effect to the... lessee's section 467 loan is equal to the net present value, as of the time the substitute lessee first... that is not a sale or exchange, the section 467 gain is the excess (if any) of the fair market value of...
Rouseff, Daniel; Badiey, Mohsen; Song, Aijun
2009-11-01
The performance of a communications equalizer is quantified in terms of the number of acoustic paths that are treated as usable signal. The analysis uses acoustical and oceanographic data collected off the Hawaiian Island of Kauai. Communication signals were measured on an eight-element vertical array at two different ranges, 1 and 2 km, and processed using an equalizer based on passive time-reversal signal processing. By estimating the Rayleigh parameter, it is shown that all paths reflected by the sea surface at both ranges undergo incoherent scattering. It is demonstrated that some of these incoherently scattered paths are still useful for coherent communications. At a range of 1 km, optimal communications performance is achieved when six acoustic paths are retained and all paths with more than one reflection off the sea surface are rejected. Consistent with a model that ignores loss from near-surface bubbles, performance improves by approximately 1.8 dB when the number of retained paths is increased from four to six. The four-path results, though, are more stable and require less frequent channel estimation. At a range of 2 km, ray refraction is observed and communications performance is optimal when some paths with two sea-surface reflections are retained.
Minimizing the Sum of Completion Times with Resource Dependant Times
NASA Astrophysics Data System (ADS)
Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe
2008-10-01
We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria. The first criterion is the sum of completion times and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, up to this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as mobile computers, phones, and GPS devices in order to prolong battery life.
Method, system and computer-readable media for measuring impedance of an energy storage device
Morrison, John L.; Morrison, William H.; Christophersen, Jon P.; Motloch, Chester G.
2016-01-26
A real-time battery impedance spectrum is acquired from a one-time record. Fast Summation Transformation (FST) is a parallel method of acquiring a real-time battery impedance spectrum from a one-time record, enabling battery diagnostics. The excitation current applied to the battery is a sum of equal-amplitude sine waves whose frequencies are octave harmonics spread over a range of interest. The sampling frequency is likewise octave- and harmonically related to all frequencies in the sum. The time profile of this sampled signal has a duration of a few periods of the lowest frequency. The battery's voltage response, with its average removed, represents the impedance of the battery in the time domain. Since the excitation frequencies are known and octave-harmonically related, a simple algorithm, FST, processes the time profile by rectifying it relative to the sine and cosine of each frequency. A second algorithm then yields the real and imaginary components at each frequency.
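The rectification step can be illustrated with a short sketch (illustrative only, not the patented FST implementation; the frequencies, amplitudes, and phases below are assumed): the real and imaginary components at each octave-harmonic frequency are recovered by correlating the average-deleted time record with the sine and cosine of that frequency.

```python
import numpy as np

# Sketch of per-frequency rectification for an octave-harmonic excitation.
f0 = 0.1                              # lowest excitation frequency, Hz (assumed)
freqs = f0 * 2.0 ** np.arange(6)      # octave harmonics: 0.1 ... 3.2 Hz
fs = f0 * 2.0 ** 10                   # sample rate, octave-related to all freqs
t = np.arange(int(2 * fs / f0)) / fs  # two periods of the lowest frequency

# Synthetic "voltage response": each frequency with a known amplitude/phase.
amps = np.linspace(1.0, 0.5, freqs.size)
phases = np.linspace(0.0, -0.6, freqs.size)
v = sum(a * np.sin(2*np.pi*f*t + p) for a, f, p in zip(amps, freqs, phases))
v -= v.mean()                         # "average deleted", as in the abstract

# Rectify relative to sine and cosine of each frequency.
for f, a, p in zip(freqs, amps, phases):
    s = 2 * np.mean(v * np.sin(2*np.pi*f*t))   # in-phase (real) component
    c = 2 * np.mean(v * np.cos(2*np.pi*f*t))   # quadrature (imaginary) component
    assert abs(np.hypot(s, c) - a) < 1e-2      # recovered amplitude at f
```

Because every excitation frequency completes an integer number of cycles in the record and the sample rate is octave-related to all of them, the sine/cosine projections at different frequencies are exactly orthogonal, so each component is recovered independently.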
Test Equal Bending by Gravity for Space and Time
NASA Astrophysics Data System (ADS)
Sweetser, Douglas
2009-05-01
For the simplest problem of gravity - a static, non-rotating, spherically symmetric source - the solution for spacetime bending around the Sun should be evenly split between time and space. That is true to first order in M/R, and confirmed by experiment. At second order, general relativity predicts different amounts of contribution from time and space without a physical justification. I show an exponential metric is consistent with light bending to first order, measurably different at second order. All terms to all orders show equal contributions from space and time. Beautiful minimalism is Nature's way.
The role of right and left parietal lobes in the conceptual processing of numbers.
Cappelletti, Marinella; Lee, Hwee Ling; Freeman, Elliot D; Price, Cathy J
2010-02-01
Neuropsychological and functional imaging studies have associated the conceptual processing of numbers with bilateral parietal regions (including intraparietal sulcus). However, the processes driving these effects remain unclear because both left and right posterior parietal regions are activated by many other conceptual, perceptual, attention, and response-selection processes. To dissociate parietal activation that is number-selective from parietal activation related to other stimulus or response-selection processes, we used fMRI to compare numbers and object names during exactly the same conceptual and perceptual tasks while factoring out activations correlating with response times. We found that right parietal activation was higher for conceptual decisions on numbers relative to the same tasks on object names, even when response time effects were fully factored out. In contrast, left parietal activation for numbers was equally involved in conceptual processing of object names. We suggest that left parietal activation for numbers reflects a range of processes, including the retrieval of learnt facts that are also involved in conceptual decisions on object names. In contrast, number selectivity in right parietal cortex reflects processes that are more involved in conceptual decisions on numbers than object names. Our results generate a new set of hypotheses that have implications for the design of future behavioral and functional imaging studies of patients with left and right parietal damage.
Wensveen, Paul J; Huijser, Léonie A E; Hoek, Lean; Kastelein, Ronald A
2016-01-01
Loudness perception can be studied based on the assumption that sounds of equal loudness elicit equal reaction time (RT; or "response latency"). We measured the underwater RTs of a harbor porpoise to narrowband frequency-modulated sounds and constructed six equal-latency contours. The contours paralleled the audiogram at low sensation levels (high RTs). At high-sensation levels, contours flattened between 0.5 and 31.5 kHz but dropped substantially (RTs shortened) beyond those frequencies. This study suggests that equal-latency-based frequency weighting can emulate noise perception in porpoises for low and middle frequencies but that the RT-loudness correlation is relatively weak for very high frequencies.
Female equality and suicide in the Indian states.
Mayer, Peter
2003-06-01
Indian suicide rates rose by 76% in the 10 years between 1984 and 1994. In this study of the 16 principal states of India, male and female suicide rates in 1994 were associated with measures of equal education for men and women. Male suicide rates were associated with equal life expectancy for men and women. Equal income for women and men was not associated with suicide rates. Unlike earlier studies, no inverse association was found between equal attainment in education and suicide sex ratios. The Indian findings thus do not conform to patterns found in more developed economies. Given increasing human development in India, it seems probable that suicide rates in that country may increase two to three times over coming decades.
Novel fabrication method of microchannel plates
NASA Astrophysics Data System (ADS)
Yi, Whikun; Jeong, Taewon; Jin, Sunghwan; Yu, SeGi; Lee, Jeonghee; Kim, J. M.
2000-11-01
We have developed a novel microchannel plate (MCP) by introducing new materials and process technologies. The key features of our MCP are summarized as follows: (i) bulk alumina as a substrate, (ii) channel locations defined by a programmed-hole puncher, (iii) thin-film deposition by electroless plating and/or a sol-gel process, and (iv) an easy fabrication process suitable for mass production and a large-sized MCP. The characteristics of the resulting MCP have been evaluated with a high input current source, such as a continuous electron beam from an electron gun and Spindt-type field emitters, to obtain information on electron multiplication. For a 0.28 μA incident beam, the output current is enhanced ~170 times, which equals 1% of the total bias current of the MCP at a bias voltage of 2600 V. When we insert an MCP between the cathode and the anode of a field emission display panel, the brightness of the luminescent light increases 3-4 times because the emitted electrons are multiplied through the pore arrays of the MCP.
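The quoted figures are mutually consistent; inferring the bias current from the stated 1% ratio (a back-of-envelope check, not a value given in the abstract):

```latex
I_{\text{out}} \approx 170 \times I_{\text{in}}
             = 170 \times 0.28\,\mu\text{A} \approx 47.6\,\mu\text{A},
\qquad
I_{\text{bias}} \approx \frac{I_{\text{out}}}{0.01} \approx 4.8\,\text{mA}.
```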
A gossip based information fusion protocol for distributed frequent itemset mining
NASA Astrophysics Data System (ADS)
Sohrabi, Mohammad Karim
2018-07-01
The computational complexity, huge memory requirement, and time-consuming nature of the frequent pattern mining process are the most important motivations for distributing and parallelizing this mining process. On the other hand, the emergence of distributed computational and operational environments, which causes data to be produced and maintained on different distributed data sources, makes the parallelization and distribution of the knowledge discovery process inevitable. In this paper, a gossip-based distributed itemset mining (GDIM) algorithm is proposed to extract frequent itemsets, which are special types of frequent patterns, in a wireless sensor network environment. In this algorithm, local frequent itemsets of each sensor are extracted using a bit-wise horizontal approach (LHPM) from nodes that are clustered using a LEACH-based protocol. Cluster heads use a gossip-based protocol to communicate with each other and find the patterns whose global support is equal to or greater than the specified support threshold. Experimental results show that the proposed algorithm outperforms the best existing gossip-based algorithm in terms of execution time.
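The cluster-head communication step can be illustrated with a generic gossip-averaging sketch (an assumed stand-in for GDIM's actual protocol; the local counts and threshold below are hypothetical): each head starts from its local support, repeated pairwise averaging drives every head toward the global mean support, and each head then compares its estimate against the threshold.

```python
import random

# Hypothetical local support counts held by four cluster heads.
random.seed(1)
local_support = [12.0, 30.0, 3.0, 15.0]
true_mean = sum(local_support) / len(local_support)

x = local_support[:]
for _ in range(200):                      # gossip rounds
    i, j = random.sample(range(len(x)), 2)
    x[i] = x[j] = (x[i] + x[j]) / 2       # pairwise averaging step

# Every head converges to the global mean support ...
assert all(abs(v - true_mean) < 1e-3 for v in x)
# ... so every head makes the same frequent/infrequent decision.
min_support = 10.0
assert all((v >= min_support) == (true_mean >= min_support) for v in x)
```

Pairwise averaging preserves the sum of the values, which is why the common limit is exactly the network-wide mean.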
Automated matching software for clinical trials eligibility: measuring efficiency and flexibility.
Penberthy, Lynne; Brown, Richard; Puma, Federico; Dahman, Bassam
2010-05-01
Clinical trials (CTs) serve as the medium that translates clinical research into standards of care. Low or slow recruitment leads to delays in delivering new therapies to the public. Determining eligibility for all patients is one of the most important factors in assuring unbiased results from the clinical trials process and represents the first step in addressing underrepresentation and equal access to clinical trials. This is a pilot project evaluating the efficiency, flexibility, and generalizability of an automated clinical trials eligibility screening tool across 5 different clinical trials and clinical trial scenarios. There was a substantial total savings during the study period in research staff time spent evaluating patients for eligibility, ranging from 165 h to 1329 h. There was a marked enhancement in efficiency with the automated system for all but one study in the pilot. The ratio of mean staff time required per eligible patient identified ranged from 0.8 to 19.4 for the manual versus the automated process. The results of this study demonstrate that automation offers an opportunity to reduce the burden of the manual processes required for CT eligibility screening and to assure that all patients have an opportunity to be evaluated for participation in clinical trials as appropriate. The automated process greatly reduces the time spent on eligibility screening compared with the traditional manual process by effectively transferring the load of the eligibility assessment process to the computer. Copyright (c) 2010 Elsevier Inc. All rights reserved.
gallon equivalent of natural gas at the time fuel is dispensed or delivered into the tank of a motor vehicle. A gasoline gallon equivalent is equal to 5.66 lbs. of CNG and a diesel gallon equivalent is equal
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
Terahertz generation by difference frequency generation from a compact optical parametric oscillator
NASA Astrophysics Data System (ADS)
Li, Zhongyang; Wang, Silei; Wang, Mengtao; Wang, Weishu
2017-11-01
Terahertz (THz) generation by difference frequency generation (DFG) processes with dual idler waves is theoretically analyzed. The dual idler waves are generated by a compact optical parametric oscillator (OPO) with periodically poled lithium niobate (PPLN). By selecting the poling period of the PPLN, the phase-matching conditions in the same PPLN can be simultaneously satisfied for the optical parametric oscillation generating the signal and idler waves and for the DFG generating the THz waves. Moreover, 3rd-order cascaded DFG processes generating THz waves can be realized in the same PPLN. Taking as an example 8.341 THz, which lies in the vicinity of the polariton resonances, THz intensities and quantum conversion efficiencies are calculated. Compared with non-cascaded DFG processes, the THz intensity at 8.341 THz increases by a factor of 2.57 in 3rd-order cascaded DFG processes. When the pump intensity equals 20 MW/mm², a quantum conversion efficiency of 106% can be realized in 3rd-order cascaded DFG processes, which exceeds the Manley-Rowe limit.
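A brief way to see why cascading can exceed the single-process limit (notation assumed, not from the abstract): in a single DFG step, at most one THz photon is generated per pump photon, capping the photon-number quantum efficiency at 100%; an n-order cascade reuses each down-converted wave as a new pump, so the bound grows with the cascade order.

```latex
\eta_q^{(1)} \;=\; \frac{N_{\mathrm{THz}}}{N_{\mathrm{pump}}} \;\le\; 1
\quad (\text{single DFG, Manley--Rowe}),
\qquad
\eta_q^{(n)} \;\le\; n
\quad (n\text{-order cascade}),
```

which is consistent with the reported 106% efficiency for a 3rd-order cascade.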
Zeng, Tao; Mao, Wen; Lu, Qing
2016-05-25
Scalp-recorded event-related potentials are known to be sensitive to particular aspects of sentence processing. The N400 component is widely recognized as an effect closely related to lexical-semantic processing. The absence of an N400 effect in participants performing tasks in Indo-European languages has been considered evidence that failed syntactic category processing appears to block lexical-semantic integration and that syntactic structure building is a prerequisite of semantic analysis. An event-related potential experiment was designed to investigate whether such syntactic primacy can be considered to apply equally to Chinese sentence processing. Besides correct middles, sentences with either single semantic or single syntactic violation as well as double syntactic and semantic anomaly were used in the present research. Results showed that both purely semantic and combined violation induced a broad negativity in the time window 300-500 ms, indicating the independence of lexical-semantic integration. These findings provided solid evidence that lexical-semantic parsing plays a crucial role in Chinese sentence comprehension.
An application of viola jones method for face recognition for absence process efficiency
NASA Astrophysics Data System (ADS)
Rizki Damanik, Rudolfo; Sitanggang, Delima; Pasaribu, Hendra; Siagian, Hendrik; Gulo, Frisman
2018-04-01
An attendance record is the document a company uses to record the arrival time of each employee. The most common problems with a fingerprint machine are a slow sensor and a sensor failing to recognize a finger. Employees arrive late to work because of difficulties with the fingerprint system: they need about 3-5 minutes to clock in when a finger is wet or does not register. To overcome this problem, this research utilized facial recognition for the attendance process. The method used for facial recognition was Viola-Jones. During the processing phase, the RGB face image was converted into a histogram-equalized face image for the subsequent recognition stage. The result of this research was that the attendance process could be completed in less than 1 second, with a maximum face tilt of about ±70° and a distance of 20-200 cm. After implementing facial recognition, the attendance process is more efficient, taking less than 1 minute.
Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
Purpose of the Study: 99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel and hence inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have gained only limited acceptance. In this study, we investigated the effect of the GHE technique on 99mTc-MDP bone scan images. Materials and Methods: A set of 89 low-contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel-hole collimation on a Symbia E gamma camera and then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 denotes very poor and 5 the best image quality. A statistical test was applied to determine the significance of the difference between the mean scores assigned to the input and processed images. Results: The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference between input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed to meet the requirements of nuclear medicine physicians. Conclusion: The GHE technique can be used on low-contrast bone scan images. In some cases, histogram equalization combined with another postprocessing technique is useful. PMID:29142344
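A minimal sketch of global histogram equalization, assuming a generic 8-bit image rather than the study's clinical pipeline: the cumulative histogram is used as a lookup table that spreads a narrow intensity range across the full gray scale.

```python
import numpy as np

def equalize(img, levels=256):
    """Global histogram equalization for an 8-bit image (illustrative sketch)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first occupied gray level
    # Map each gray level so the output histogram is approximately flat.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut.astype(np.uint8)[img]

# Synthetic low-contrast image: intensities crammed into [90, 110).
rng = np.random.default_rng(0)
low_contrast = rng.integers(90, 110, size=(64, 64)).astype(np.uint8)
out = equalize(low_contrast)
# Equalization stretches the narrow input range toward the full 0-255 range.
assert out.max() - out.min() > low_contrast.max() - low_contrast.min()
```

The oversaturation noted in the study is visible in this mapping too: sparsely populated gray levels get pushed to the extremes of the output range.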
Video enhancement workbench: an operational real-time video image processing system
NASA Astrophysics Data System (ADS)
Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.
1993-01-01
Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low- contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of objects directly adjacent. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
System for and method of freezing biological tissue
NASA Technical Reports Server (NTRS)
Williams, T. E.; Cygnarowicz, T. A. (Inventor)
1978-01-01
Biological tissue is frozen while in a polyethylene bag placed in abutting relationship against opposed walls of a pair of heaters. The bag and tissue are cooled with refrigerating gas at a time-programmed rate at least equal to the maximum cooling rate needed at any time during the freezing process. The temperature of the bag, and hence of the tissue, is compared with a time-programmed desired value for the tissue temperature to derive an error indication. The heater is activated in response to the error indication so that the temperature of the tissue follows the desired time-programmed value. The tissue is heated to compensate for excessive cooling caused by the refrigerating gas. In response to the error signal, the heater is deactivated while the latent heat of fusion is being removed from the tissue as it changes phase from liquid to solid.
NASA Astrophysics Data System (ADS)
Yuvchenko, S. A.; Tzyipin, D. V.; Isaeva, A. A.; Isaeva, E. A.; Ushakova, O. V.; Macheev, M. S.; Zimnyakov, D. A.
2018-04-01
The temporal evolution of metastable and unstable foams was studied. Diffusion-wave spectroscopy was chosen as the diagnostic method, with calculation of the correlation time of the fluctuations in the intensity of the probing radiation. It was established that the correlation time increases with time according to a power law whose exponent depends on the type of evolution: the exponent was found to equal 0.5 for the metastable foam and 2.52 for the unstable foam. It was also determined that the behaviour of the correlation time agrees well with the evolution of the characteristic dimensions of the scatterers, in the form of bubbles in the medium, which can be used for contactless monitoring of foaming processes in the production of foam-like materials for various applications, for example, in the synthesis of biocompatible polymer matrices (scaffolds).
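The reported growth of the correlation time can be written compactly (symbol names assumed; only the exponents come from the abstract):

```latex
\tau_c(t) \;\propto\; t^{\beta},
\qquad
\beta \approx 0.5 \;(\text{metastable foam}),
\qquad
\beta \approx 2.52 \;(\text{unstable foam}).
```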
Single-agent parallel window search
NASA Technical Reports Server (NTRS)
Powley, Curt; Korf, Richard E.
1991-01-01
Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A(asterisk) (IDA-asterisk) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA-asterisk by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
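The node-ordering rule described above can be sketched as follows (one plausible realization, not the authors' code): order nodes globally by f = g + h and, among nodes with equal f, prefer minimum h, so that nodes estimated to be closer to the goal are expanded first.

```python
# Sketch of global node ordering: primary key f = g + h, tie-break on h.
def ordering_key(node):
    g, h = node["g"], node["h"]
    return (g + h, h)   # equal f resolved in favor of small h

frontier = [
    {"name": "a", "g": 2, "h": 4},   # f = 6
    {"name": "b", "g": 4, "h": 2},   # f = 6, smaller h -> expanded before "a"
    {"name": "c", "g": 1, "h": 4},   # f = 5, expanded first
]
order = [n["name"] for n in sorted(frontier, key=ordering_key)]
assert order == ["c", "b", "a"]
```

Preferring small h among equal-f nodes biases the search toward deeper nodes, which is what shortens the iterations before the goal iteration in serial IDA*.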
Issues in Strategic Thought: From Clausewitz to Al-Qaida
2012-12-01
form of warfare. All things being equal, including the objective point, the offense would theoretically have the advantage because it could choose the... Clausewitz imagined two battling commanders whose interests "are opposed in equal measure to each other" as a way of conceptualizing pure polarity. In... formula apply equally to past history and to present politics. The social movements of all times have played around essentially the same physical
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
A componential model of human interaction with graphs: 1. Linear regression modeling
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert
1994-01-01
Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the value of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications to the MA-P model, alternative models, and design implications of the MA-P model.
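The MA-P prediction that response time grows linearly with the number of processing steps can be sketched with a simple least-squares fit (the step counts and times below are invented for illustration; only the linear form comes from the abstract).

```python
import numpy as np

# Hypothetical data: number of component processing steps per trial type
# and the corresponding mean response times in seconds.
steps = np.array([2, 3, 4, 5, 6])
rt = np.array([0.9, 1.3, 1.6, 2.1, 2.4])

# One-parameter-style linear model: rt ≈ intercept + slope * steps.
slope, intercept = np.polyfit(steps, rt, 1)
pred = intercept + slope * steps

# Variance accounted for (R^2), the statistic reported in the abstract.
r2 = 1 - np.sum((rt - pred) ** 2) / np.sum((rt - np.mean(rt)) ** 2)
assert r2 > 0.95   # a linear model accounts for most of the variance
```

In the two-parameter version, arithmetic and nonarithmetic components would simply receive separate per-step weights in the same linear form.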
NASA Astrophysics Data System (ADS)
Hecksher, Tina; Olsen, Niels Boye; Dyre, Jeppe C.
2017-04-01
This paper presents data for supercooled squalane's frequency-dependent shear modulus covering frequencies from 10 mHz to 30 kHz and temperatures from 168 K to 190 K; measurements are also reported for the glass phase down to 146 K. The data reveal a strong mechanical beta process. A model is proposed for the shear response of the metastable equilibrium liquid phase of supercooled liquids. The model is an electrical equivalent-circuit characterized by additivity of the dynamic shear compliances of the alpha and beta processes. The nontrivial parts of the alpha and beta processes are each represented by a "Cole-Cole retardation element" defined as a series connection of a capacitor and a constant-phase element, resulting in the Cole-Cole compliance function well-known from dielectrics. The model, which assumes that the high-frequency decay of the alpha shear compliance loss varies with the angular frequency as ω^{-1/2}, has seven parameters. Assuming time-temperature superposition for the alpha and beta processes separately, the number of parameters varying with temperature is reduced to four. The model provides a better fit to the data than an equally parametrized Havriliak-Negami type model. From the temperature dependence of the best-fit model parameters, the following conclusions are drawn: (1) the alpha relaxation time conforms to the shoving model; (2) the beta relaxation loss-peak frequency is almost temperature independent; (3) the alpha compliance magnitude, which in the model equals the inverse of the instantaneous shear modulus, is only weakly temperature dependent; (4) the beta compliance magnitude decreases by a factor of three upon cooling in the temperature range studied. The final part of the paper briefly presents measurements of the dynamic adiabatic bulk modulus covering frequencies from 10 mHz to 10 kHz in the temperature range from 172 K to 200 K. The data are qualitatively similar to the shear modulus data by having a significant beta process. A single-order-parameter framework is suggested to rationalize these similarities.
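For reference, the Cole-Cole compliance function that each retardation element realizes has the standard form (symbols assumed: ΔJ the compliance magnitude, τ the retardation time, 0 < α ≤ 1):

```latex
J^{*}(\omega) \;=\; \frac{\Delta J}{1 + (i\omega\tau)^{\alpha}},
```

and a series connection of a capacitor (compliance ΔJ) with a constant-phase element reproduces exactly this frequency dependence, which is how the equivalent-circuit model builds it in.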
Chen, Yen-Nien; Lee, Pei-Yuan; Chang, Chih-Wei; Ho, Yi-Hung; Peng, Yao-Te; Chang, Chih-Han; Li, Chun-Ting
2017-03-01
This study numerically investigated the deformation of titanium elastic nails prebent to various degrees during implantation into the intramedullary canal of fractured bones and the mechanism by which this prebending influences the stability of the fractured bone. Three degrees of prebending of the implanted portions of the nails were used: equal to, two times, and three times the diameter of the intramedullary canal. Furthermore, a simulated diaphyseal fracture with a 5-mm gap was created in the middle shaft portion of the bone, fixed with two elastic nails in a double C-type configuration. End caps were simulated using a constraint equation. To confirm that the simulation process can reproduce the mechanical response of the nail inside the intramedullary canal, a validation experiment was conducted using sawbones. The results indicated that increasing the degree of nail prebending helped straighten the nails against the inner aspect of the canal after implantation, with an increase in stability under torsion. Furthermore, reducing nail prebending caused a larger portion of the nails to move closer to the loading site and the center of the bone after implantation; the use of end caps prevented the nail tips from collapsing and increased axial stability. End cap use was critical for preventing the nail tips from collapsing and for increasing the stability of nails prebent to a degree equal to the diameter of the canal, where the frictional force between nail and canal is insufficient. Therefore, titanium elastic nail prebending in a double C-type configuration to a degree three times the diameter of the canal represents a superior solution for treating transverse fractures without a gap, whereas prebending to a degree equal to the diameter of the intramedullary canal combined with end cap use represents an advanced solution for treating comminuted fractures in a diaphyseal long bone fracture.
Diagnosis of Middle Atmosphere Climate Sensitivity by the Climate Feedback Response Analysis Method
NASA Technical Reports Server (NTRS)
Zhu, Xun; Yee, Jeng-Hwa; Cai, Ming; Swartz, William H.; Coy, Lawrence; Aquila, Valentina; Talaat, Elsayed R.
2014-01-01
We present a new method to diagnose middle atmosphere climate sensitivity by extending the Climate Feedback-Response Analysis Method (CFRAM) for the coupled atmosphere-surface system to the middle atmosphere. The middle atmosphere CFRAM (MCFRAM) is built on the atmospheric energy equation per unit mass, with radiative heating and cooling rates as its major thermal energy sources. MCFRAM preserves CFRAM's unique additive property, whereby the sum of all partial temperature changes due to variations in external forcing and feedback processes equals the observed temperature change. In addition, MCFRAM establishes a physical relationship of radiative damping between the energy perturbations associated with various feedback processes and the temperature perturbations associated with thermal responses. MCFRAM is applied to both measurements and model output fields to diagnose middle atmosphere climate sensitivity. It is found that the largest component of the middle atmosphere temperature response to the 11-year solar cycle (solar maximum vs. solar minimum) comes directly from the partial temperature change due to the variation of the input solar flux. Increasing CO2 always cools the middle atmosphere with time, whereas the partial temperature change due to O3 variation can be either positive or negative. The partial temperature changes due to different feedbacks show distinctly different spatial patterns. The thermally driven, globally averaged partial temperature change due to all radiative processes is approximately equal to the observed temperature change, about 0.5 K near 70 km between solar maximum and solar minimum.
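The additive property and the radiative-damping relationship described above can be written compactly. The following is a schematic sketch based on the CFRAM literature; the symbols are generic illustrations, not the authors' notation:

```latex
% Additive decomposition: the observed temperature change equals the sum of
% partial temperature changes attributable to individual processes j
\Delta T_{\mathrm{obs}} = \sum_{j} \Delta T_{j},
\qquad
\Delta T_{j} = \left(\frac{\partial R}{\partial T}\right)^{-1} \Delta F_{j},
```

where \(\Delta F_{j}\) is the energy-flux perturbation attributed to forcing or feedback process \(j\) (e.g., solar flux, CO2, O3) and \(\partial R/\partial T\) is the linearized radiative-damping operator.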
Self-similarity in incompressible Navier-Stokes equations.
Ercan, Ali; Kavvas, M Levent
2015-12-01
The self-similarity conditions of the 3-dimensional (3D) incompressible Navier-Stokes equations are obtained by utilizing a one-parameter Lie group of point scaling transformations. It is found that the scaling exponents of the length dimensions in the i = 1, 2, 3 coordinate directions are not arbitrary but must be equal for self-similarity of the 3D incompressible Navier-Stokes equations. It is also shown that self-similarity in this particular flow process can be achieved in different time and space scales when the viscosity of the fluid is also scaled in addition to the other flow variables. In other words, the self-similarity of the Navier-Stokes equations is achievable under different fluid environments in the same or different gravity conditions. Self-similarity criteria due to initial and boundary conditions are also presented. Utilizing the proposed self-similarity conditions of the 3D hydrodynamic flow process, the value of a flow variable at a specified time and space can be scaled to a corresponding value in a self-similar domain at the corresponding time and space.
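A one-parameter point-scaling ansatz of the kind used in the paper can be sketched as follows (the exponent names α and β are ours, not the authors' notation):

```latex
x_i^{*} = \lambda^{\alpha} x_i \;\; (i = 1, 2, 3), \qquad
t^{*} = \lambda^{\beta} t, \qquad
u_i^{*} = \lambda^{\alpha-\beta} u_i, \qquad
p^{*} = \lambda^{2(\alpha-\beta)} p, \qquad
\nu^{*} = \lambda^{2\alpha-\beta} \nu .
```

Under this scaling every term of \(\partial_t u + (u\cdot\nabla)u = -\nabla p/\rho + \nu\nabla^2 u\) picks up the common factor \(\lambda^{\alpha-2\beta}\), so the equations are invariant. A single exponent \(\alpha\) must be shared by all three coordinate directions, and self-similarity across different space-time scales requires rescaling the viscosity as well, consistent with the abstract.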
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Guidelines. 42.306 Section 42.306... PROCEDURES Equal Employment Opportunity Program Guidelines § 42.306 Guidelines. (a) Recipient agencies are... guidelines under their equal employment opportunity program which will correct, in a timely manner, any...
NASA Technical Reports Server (NTRS)
Sidney, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.;
2014-01-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self-consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with total mass of 20 solar masses or less and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor of approximately 20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor of approximately 1000 longer processing time.
NASA Astrophysics Data System (ADS)
Martin-Fernandez, M. L.; Tobin, M. J.; Clarke, D. T.; Gregory, C. M.; Jones, G. R.
1998-02-01
We describe an instrument designed to monitor molecular motions in multiphasic, weakly fluorescent microscopic systems. It combines synchrotron radiation, a low irradiance polarized microfluorimeter, and an automated, multiframing, single-photon-counting data acquisition system, and is capable of continually accumulating subnanosecond resolved anisotropy decays with a real-time resolution of about 60 s. The instrument has initially been built to monitor ligand-receptor interactions in living cells, but can equally be applied to the continual measurement of any dynamic process involving fluorescent molecules, that occurs over a time scale from a few minutes to several hours. As a particularly demanding demonstration of its capabilities, we have used it to monitor the environmental constraints imposed on the peptide hormone epidermal growth factor during its endocytosis and recycling to the cell surface in live cells.
An Example of Economic Value in Rapid Prototyping
NASA Technical Reports Server (NTRS)
Hauer, R. L.; Braunscheidel, E. P.
2001-01-01
Today's machining projects increasingly involve complicated and intricate structures, due in part to the ability to computer-model complex surfaces and forms. The cost of producing these forms can be extremely high, not only in dollars but in time to complete, and changes are even more difficult to incorporate. The subject blade shown is an excellent example. Its complex form would have required hundreds of hours of fabrication for even a simple prototype. The procurement would have taken in the neighborhood of six weeks to complete, and the actual fabrication would have taken an equal amount of time. An alternative to this process would have been a wood model. Although cheaper than a metal fabrication, it would be extremely time intensive and require in the neighborhood of a month to produce in-house.
Evaluating MC&A effectiveness to verify the presence of nuclear materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, P. G.; Morzinski, J. A.; Ostenak, Carl A.
Traditional materials accounting is focused exclusively on the material balance area (MBA), and involves periodically closing a material balance based on accountability measurements conducted during a physical inventory. In contrast, the physical inventory for Los Alamos National Laboratory's near-real-time accounting system is established around processes and looks more like an item inventory. That is, the intent is not to measure material for accounting purposes, since materials have already been measured in the normal course of daily operations. A given unit process operates many times over the course of a material balance period. The product of a given unit process may move for processing within another unit process in the same MBA or may be transferred out of the MBA. Since few materials are unmeasured, the physical inventory for a near-real-time process area looks more like an item inventory. Thus, the intent of the physical inventory is to locate the materials on the books and verify information about the materials contained in the books. Closing a materials balance for such an area is a matter of summing all the individual mass balances for the batches processed by all unit processes in the MBA. Additionally, performance parameters are established to measure the program's effectiveness. Program effectiveness for verifying the presence of nuclear material is required to be equal to or greater than a prescribed performance level, process measurements must be within established precision and accuracy values, physical inventory results must meet or exceed performance requirements, and inventory differences must be less than a target/goal quantity. This approach exceeds DOE established accounting and physical inventory program requirements. Hence, LANL is committed to this approach and to seeking opportunities for further improvement through integrated technologies. This paper will provide a detailed description of this evaluation process.
Uneven transitions: Period- and cohort-related changes in gender attitudes in China, 1995-2007.
Shu, Xiaoling; Zhu, Yifei
2012-09-01
This paper analyzes temporal variations in two gender attitudes in China: beliefs about gender equality and perspectives on women's combined work and family roles. It uses the most recent available population series from the 1995, 2001, and 2007 World Values Surveys, covering 4500 respondents, and a series of multilevel cross-classified models to properly estimate period and cohort effects. Attitudes toward women's dual roles manifest neither period nor cohort effects; the population displays a universally high level of acceptance of women's paid employment. Orientations toward gender equality manifest both cohort and period effects: members of the youngest cohort of both sexes hold the most liberal attitudes, and the positive effect of college education has increased over time. Attitudes toward gender equality in China display neither a shift toward conservatism nor an over-time trend toward egalitarianism in 1995-2007, a time of rapid economic growth. Copyright © 2012 Elsevier Inc. All rights reserved.
Method for preparing ceramic composite
Alexander, Kathleen B.; Tiegs, Terry N.; Becher, Paul F.; Waters, Shirley B.
1996-01-01
A process for preparing a ceramic composite comprising blending TiC particulates, Al2O3 particulates, and nickel aluminide, and consolidating the mixture at a temperature and pressure sufficient to produce a densified ceramic composite having a fracture toughness equal to or greater than 7 MPa·m^1/2 and a hardness equal to or greater than 18 GPa.
USDA-ARS?s Scientific Manuscript database
In addition to the carinate metasternum, in Cromata the labrum equals the length of the first labial segment, whereas in Pseudocromata the labrum equals the length of the first two labial segments. The males of Pseudocromata do not have the dorsal process extending from the 7th abdominal tergite fou...
ERIC Educational Resources Information Center
Osler, Audrey
2015-01-01
This paper focuses on the role of narrative in enabling educational processes to support justice and equality in multicultural societies. It draws on Bhabha's (2003) concept "the right to narrate", arguing that conceptions of multicultural education which focus exclusively on the nation are insufficient in a globalized and interdependent…
ERIC Educational Resources Information Center
Petrovskiy, Igor V.; Agapova, Elena N.
2016-01-01
The aim of the research is to develop the policy and strategy recommendations to increase the quality of higher education in Russian Federation. The study examines the significance of equal educational opportunities and the influence of this factor on the educational systems of developing countries. Transformational processes in the domain of…
Possible Unconscious Bias in Recruitment and Promotion and the Need to Promote Equality
ERIC Educational Resources Information Center
Beattie, Geoffrey; Johnson, Patrick
2012-01-01
Legislation to outlaw discrimination has existed for over forty years. The Equality Act (2010) states that it is unlawful for an employer to discriminate against a candidate for a job because of their age, disability, race, belief, sexual orientation or gender in any part of the recruitment process--in job descriptions, person specifications,…
Let's Get Parents Ready for Their Initial IEP Meeting
ERIC Educational Resources Information Center
Hammond, Helen; Ingalls, Lawrence
2017-01-01
Parental participation in the initial Individual Education Program (IEP) meeting is a critical component of the process. Even though parents have rights to be equally involved in making decisions at the IEP meetings, frequently parents aren't prepared to be equal members on the team with school personnel. This study focused on a preparation…
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law, i.e., a q-exponential discount model based on Tsallis' statistics); simple hyperbolic discounting; and Stevens' power law-exponential discounting (exponential discounting with Stevens' power-law time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
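The discount functions compared above have closed forms. The sketch below (a minimal illustration with function names and parameter values of our own choosing) shows how the q-exponential family based on Tsallis' statistics contains both the exponential (q → 1) and simple hyperbolic (q = 0) models as special cases, together with one common least-squares form of AICc:

```python
import numpy as np

def q_exponential_discount(delay, k, q):
    """Subjective value of a delayed reward under q-exponential discounting:
    V(D) = 1 / exp_q(k D). Reduces to exp(-k D) as q -> 1 and to the simple
    hyperbola 1 / (1 + k D) at q = 0."""
    delay = np.asarray(delay, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(-k * delay)                     # exponential limit
    return (1.0 + (1.0 - q) * k * delay) ** (-1.0 / (1.0 - q))

def aicc(n_points, n_params, rss):
    """AICc computed from the residual sum of squares of a least-squares fit
    (small-sample correction term added to the ordinary AIC)."""
    aic = n_points * np.log(rss / n_points) + 2 * n_params
    return aic + 2 * n_params * (n_params + 1) / (n_points - n_params - 1)
```

Fitting indifference points at seven delays, as in the study, would amount to minimizing the residual sum of squares of each model and comparing the resulting `aicc` values.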
Gooch, Cynthia M.; Stern, Yaakov; Rakitin, Brian C.
2009-01-01
The effect of aging on interval timing was examined using a choice time production task, which required participants to choose a key response based on the location of the stimulus, but to delay responding until after a learned time interval. Experiment 1 varied attentional demands of the response choice portion of the task by varying difficulty of stimulus-response mapping. Choice difficulty affected temporal accuracy equally in both age groups, but older participants’ response latencies were more variable under more difficult response choice conditions. Experiment 2 tested the contribution of long-term memory to differences in choice time production between age groups over 3 days of testing. Direction of errors in time production between the two age groups diverged over the 3 sessions, but variability did not differ. Results from each experiment separately show age-related changes to attention and memory in temporal processing using different measures and manipulations in the same task. PMID:19132578
SU-F-T-345: Quasi-Dead Beams: Clinical Relevance and Implications for Automatic Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, R; Veltchev, I; Lin, T
Purpose: Beam direction selection for fixed-beam IMRT planning is typically a manual process. Severe dose-volume limits on critical structures in the thorax often result in atypical selection of beam directions compared to other body sites. This work demonstrates the potential consequences as well as the clinical relevance. Methods: 21 thoracic cases treated with 5-7 beam directions, 6 cases including non-coplanar arrangements, with fractional doses of 150-411 cGy were analyzed. Endpoints included the per-beam modulation scaling factor (MSF), variation from equal weighting, and delivery QA passing rate. Results: During analysis of patient-specific delivery QA, a sub-standard passing rate was found for a single 5-field plan (90.48% of pixels evaluated passing 3% dose, 3 mm DTA). During investigation it was found that a single beam demonstrated an MSF of 34.7 and contributed only 2.7% to the mean dose of the target. In addition, the variation from equal weighting for this beam was 17.3% absolute, resulting in another beam with an MSF of 4.6 contributing 41.9% of the mean dose to the target; a variation of 21.9% from equal weighting. The average MSF for the remaining 20 cases was 4.0 (SD 1.8), with an average absolute deviation of 2.8% from equal weighting (SD 3.1%). Conclusion: Optimization in commercial treatment planning systems typically results in relatively equally weighted beams. Extreme variation from this can result in excessively high MSFs (very small segments) and potential decreases in agreement between planned and delivered dose distributions. In addition, the resultant beam may contribute minimal dose to the target (a quasi-dead beam), a byproduct being increased treatment time and associated localization uncertainties. Potential ramifications exist for automatic planning algorithms should they allow for user-defined beam directions. Additionally, these quasi-dead beams may be embedded in the libraries of model-based systems, potentially resulting in inefficient and less accurate deliveries.
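The equal-weighting arithmetic in the abstract is easy to verify. The helper below is our own sketch (not from the paper); it reproduces the quoted deviations for a hypothetical 5-field plan in which one beam contributes 2.7% and another 41.9% of the target mean dose:

```python
def deviation_from_equal_weighting(contributions_percent):
    """Absolute deviation (in percentage points) of each beam's contribution to
    the target mean dose from the equal-weighting share 100/n of an n-beam plan."""
    n = len(contributions_percent)
    equal_share = 100.0 / n
    return [round(abs(c - equal_share), 1) for c in contributions_percent]

# Hypothetical 5-field plan: the two beams quoted in the abstract, with the
# remaining dose split roughly evenly among the other three beams.
deviations = deviation_from_equal_weighting([2.7, 41.9, 18.5, 18.5, 18.4])
# -> [17.3, 21.9, 1.5, 1.5, 1.6]
```

The first two entries, 17.3 and 21.9 percentage points, match the deviations reported in the abstract for a 5-field plan (equal share 20% per beam).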
Are Nonadjacent Collocations Processed Faster?
ERIC Educational Resources Information Center
Vilkaite, Laura
2016-01-01
Numerous studies have shown processing advantages for collocations, but they only investigated processing of adjacent collocations (e.g., "provide information"). However, in naturally occurring language, nonadjacent collocations ("provide" some of the "information") are equally, if not more frequent. This raises the…
The architecture of blind equalizer for MIMO free space optical communication system
NASA Astrophysics Data System (ADS)
Li, Hongwei; Huang, Yongmei
2016-10-01
The free space optical (FSO) communication system has attracted many researchers from different countries, owing to advantages such as high security, high speed, and anti-interference capability. Among the channels used by FSO communication systems, the atmospheric channel is particularly difficult to deal with, for at least two reasons: the scintillation of the optical carrier intensity caused by atmospheric turbulence, and the multipath effect caused by optical scattering. Many studies have shown that MIMO (Multiple Input Multiple Output) technology can effectively overcome the scintillation of the optical carrier propagating through the atmosphere. The background of this paper is therefore a MIMO system that includes multiple optical transmitting antennas and multiple optical receiving antennas. Particles such as haze, water droplets, and aerosols exist widely in the atmosphere. When the optical carrier meets these particles, scattering is inevitable, which leads to the multipath effect. As a result, an optical pulse transmitted by the optical transmitter becomes wider, to some extent, by the time it reaches the optical receiver. If the information transmission rate is quite low, the multipath effect has little bearing on the bit error rate (BER) of the communication system. Once the information transmission rate increases to a high level, however, the multipath effect produces serious intersymbol interference (ISI) and the bit error rate increases severely. To preserve the advantages of the FSO communication system, the intersymbol interference problem must be solved, so channel equalization technology is necessary. This paper aims to select an equalizer and design a suitable equalization algorithm for a MIMO free space optical communication system to overcome the serious bit error rate problem.
Reliability and efficiency are two important indexes of communication. For a MIMO communication system, there are two typical equalization methods. In the first, every receiving antenna has an independent equalizer that uses no information from the other receiving antennas. In the second, the information from all of the receiving antennas is mixed according to definite rules; this is called space-time equalization. The former is discussed in this paper. Equalization algorithms include training and non-training modes. The training mode requires training codes transmitted by the transmitter throughout the communication process, which reduces the communication efficiency to some extent. To improve communication efficiency, a blind equalization algorithm, a non-training mode, is used to solve for the parameters of the equalizer. In this paper, the atmospheric channel is first described, focusing on the scintillation and multipath effect of the optical carrier. Then, the structure of an equalizer for a MIMO free space optical communication system is introduced, followed by the principle of the blind equalization algorithm. In addition, simulation results are presented. The paper closes with conclusions and future work.
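The abstract does not name the specific blind algorithm used. As a generic illustration of blind equalization, the sketch below runs the classic constant-modulus algorithm (CMA) on a toy real-valued ISI channel; the channel taps, step size, and all names are our own assumptions, not the paper's design:

```python
import numpy as np

def cma_equalize(received, num_taps=7, mu=1e-3, radius=1.0):
    """Constant-modulus algorithm: adapt FIR taps w to drive |y|^2 toward
    `radius` using only the received signal (no training sequence)."""
    w = np.zeros(num_taps)
    w[num_taps // 2] = 1.0                        # center-spike initialization
    out = np.empty(len(received) - num_taps + 1)
    cost = np.empty_like(out)
    for n in range(len(out)):
        x = received[n:n + num_taps][::-1]        # regressor, most recent first
        y = w @ x
        w -= mu * y * (y * y - radius) * x        # stochastic-gradient CMA update
        out[n], cost[n] = y, (y * y - radius) ** 2
    return out, cost, w

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)      # BPSK-like symbols
channel = np.array([1.0, 0.45, 0.2])              # toy multipath (ISI) channel
rx = np.convolve(symbols, channel, mode="full")[:len(symbols)]
y, cost, w = cma_equalize(rx)
```

The constant-modulus cost should fall as the taps adapt, which is the defining property of a blind (non-training) equalizer: no reference symbols are ever consumed.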
Zink, Adriana Gledys; Molina, Eder Cassola; Diniz, Michele Baffi; Santos, Maria Teresa Botti Rodrigues; Guaré, Renata Oliveira
2018-01-01
The purpose of this study was to develop and evaluate an application (app) facilitating patient-professional communication among individuals with autism spectrum disorder (ASD) and compare it with the Picture Exchange Communication System (PECS). Forty children aged nine to 15 years were randomly divided into two groups: G1 (app; n = 20) and G2 (PECS; n = 20). Initially, the visual contact timing of the groups was measured. Pictures of a room, ground, chair, dentist, mouth, low-speed handpiece, and air-water syringe were presented to both groups. Each picture was shown up to three times per appointment to evaluate whether or not the child accepted the procedure. After dental prophylaxis, caries experience was recorded. The prevalence of dental caries was 37.5 percent. Differences in the number of attempts required for each picture to acquire the proposed skill were found between the groups (Mann-Whitney, P<0.05). A significant difference in the median number of attempts (G1 = 9.5, G2 = 15) and appointments (G1 = 3, G2 = 5) was observed (Mann-Whitney, P<0.05). The app was more effective than the Picture Exchange Communication System for dentist-patient communication, decreasing the number of appointments required for preventive dental care and clinical examinations.
On-Board Real-Time Optimization Control for Turbo-Fan Engine Life Extending
NASA Astrophysics Data System (ADS)
Zheng, Qiangang; Zhang, Haibo; Miao, Lizhen; Sun, Fengyong
2017-11-01
A real-time optimization control method is proposed to extend turbo-fan engine service life. This real-time optimization control is based on an on-board engine model devised by MRR-LSSVR (a multi-input multi-output recursive reduced least squares support vector regression method). To solve the optimization problem, an FSQP (feasible sequential quadratic programming) algorithm is utilized. Thermal mechanical fatigue is taken into account during the optimization process. Furthermore, to describe engine life decay, a thermal mechanical fatigue model of the engine acceleration process is established. The optimization objective function not only contains a term that yields fast engine response, but also includes a term for the total mechanical strain range, which is positively related to engine fatigue life. Finally, simulations of conventional optimization control, which considers only engine acceleration performance, and of the proposed optimization method have been conducted. The simulations demonstrate that the times of the two control methods from idle to 99.5% of maximum power are equal. However, the engine life using the proposed optimization method is increased by a striking 36.17% compared with that using conventional optimization control.
Modified stochastic fragmentation of an interval as an ageing process
NASA Astrophysics Data System (ADS)
Fortin, Jean-Yves
2018-02-01
We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of 'quakes' and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution of the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^(-1/2) when s ≫ 1 and the ratio t/s is fixed, in agreement with numerical simulations. The same process with a reset suppresses the aging phenomenon beyond a typical time scale defined by the reset parameter.
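One plausible reading of the cut-and-regenerate mechanism is easy to simulate; the code below is our own sketch, not the authors' implementation. Cutting at a uniform point u discards every existing cut at or beyond u, so the surviving cuts form a backward record sequence and the mean cut number after t steps is the harmonic number H_t, consistent with the Stirling-number (record-statistics) distribution mentioned above:

```python
import random

def final_fragment_count(t, rng):
    """Fragment [0,1] for t steps: cut at a uniform u, then replace everything
    to the right of u by a single regenerated fragment (cuts there are lost)."""
    cuts = []
    for _ in range(t):
        u = rng.random()
        cuts = [c for c in cuts if c < u] + [u]
    return len(cuts) + 1      # k cuts partition the interval into k+1 fragments

rng = random.Random(42)
# Mean fragment number grows like log t: for t = 1000 the expected cut number
# is H_1000 ~ 7.49, so the mean fragment count should be near 8.5.
mean_count = sum(final_fragment_count(1000, rng) for _ in range(200)) / 200
```

A cut made at step i survives to step t only if no later draw falls below it, which happens with probability 1/(t - i + 1); summing over i gives the H_t result quoted in the comment.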
Long-term persistence of solar activity. [Abstract only
NASA Technical Reports Server (NTRS)
Ruzmaikin, Alexander; Feynman, Joan; Robinson, Paul
1994-01-01
The solar irradiance has been found to change by 0.1% over the recent solar cycle. A change of irradiance of about 0.5% is required to affect the Earth's climate. How frequently can a variation of this size be expected? We examine the question of the persistence of non-periodic variations in solar activity. The Hurst exponent, which characterizes the persistence of a time series (Mandelbrot and Wallis, 1969), is evaluated for the series of C-14 data for the time interval from about 6000 BC to 1950 AD (Stuiver and Pearson, 1986). We find a constant Hurst exponent, suggesting that solar activity in the frequency range of 100 to 3000 years includes an important continuum component in addition to the well-known periodic variations. The value we calculate, H approximately equal to 0.8, is significantly larger than the value of 0.5 that would correspond to variations produced by a white-noise process. This value is in good agreement with the results for the monthly sunspot data reported elsewhere, indicating that the physics that produces the continuum is a correlated random process (Ruzmaikin et al., 1992), and that it is the same type of process over a wide range of time interval lengths. We conclude that the time period over which an irradiance change of 0.5% can be expected to occur is significantly shorter than that which would be expected for variations produced by a white-noise process.
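As an illustration of what H measures (our own sketch, not the authors' C-14 analysis): for a self-affine profile the spread of n-step displacements grows like n^H, and an uncorrelated white-noise series gives H ≈ 0.5, the benchmark against which the reported H ≈ 0.8 indicates persistence:

```python
import numpy as np

def hurst_exponent(series, window_sizes):
    """Estimate H from the scaling of n-step displacements of the cumulative
    profile: std(y[n:] - y[:-n]) ~ n**H for a self-affine process."""
    y = np.cumsum(series - series.mean())
    log_n = [np.log(n) for n in window_sizes]
    log_s = [np.log((y[n:] - y[:-n]).std()) for n in window_sizes]
    slope, _ = np.polyfit(log_n, log_s, 1)
    return slope

rng = np.random.default_rng(1)
# White noise: displacements of the cumulative sum scale as sqrt(n), so H ~ 0.5.
h_white = hurst_exponent(rng.standard_normal(20000), [8, 16, 32, 64, 128])
```

A persistent series (positively correlated increments) would instead yield an estimate above 0.5, as the abstract reports for the solar-activity record.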
Bai, Neng; Xia, Cen; Li, Guifang
2012-10-08
We propose and experimentally demonstrate single-carrier adaptive frequency-domain equalization (SC-FDE) to mitigate multipath interference (MPI) for the transmission of the fundamental mode in a few-mode fiber. The FDE approach reduces computational complexity significantly compared to the time-domain equalization (TDE) approach while maintaining the same performance. Both FDE and TDE methods are evaluated by simulating long-haul fundamental-mode transmission using a few-mode fiber. For the fundamental mode operation, the required tap length of the equalizer depends on the differential mode group delay (DMGD) of a single span rather than DMGD of the entire link.
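The computational advantage claimed above comes from the one-multiply-per-bin structure of FDE. The sketch below is a zero-forcing toy with a cyclic prefix and an invented 3-tap channel, not the paper's adaptive equalizer:

```python
import numpy as np

def fde_zero_forcing(rx_block, channel, fft_size):
    """Single-carrier FDE: equalize one block with one complex division per
    frequency bin (zero-forcing), instead of a long time-domain convolution."""
    H = np.fft.fft(channel, fft_size)
    return np.fft.ifft(np.fft.fft(rx_block) / H)

rng = np.random.default_rng(7)
N, cp = 256, 8                                    # block size, cyclic-prefix length
syms = rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)   # QPSK block
h = np.array([1.0, 0.3 + 0.2j, 0.1])              # toy ISI channel (DMGD-like spread)
tx = np.concatenate([syms[-cp:], syms])           # prepend cyclic prefix
rx = np.convolve(tx, h)[cp:cp + N]                # propagate, then strip the prefix
eq = fde_zero_forcing(rx, h, N)                   # recovers syms exactly (no noise)
```

Because the cyclic prefix is at least as long as the channel memory, the stripped block is a circular convolution, and the per-bin division inverts the channel exactly in this noiseless toy; an adaptive FDE, as in the paper, would instead update the per-bin coefficients from data. The required tap span (here, the FFT size and prefix) is set by the delay spread of a single span, mirroring the abstract's DMGD observation.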
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transform and mathematical morphology. First, the Laplacian Gaussian pyramid operator is applied to decompose the mammogram into subband images at different scales. The detail (high-frequency) subimages are then equalized by contrast limited adaptive histogram equalization (CLAHE), and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by CLAHE and mathematical morphology, respectively, and the result is processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, signal-to-noise ratio (SNR), and contrast improvement index (CII).
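As a structural sketch of this pipeline (not the authors' exact Laplacian-Gaussian operator or CLAHE implementation), the numpy-only code below builds a Laplacian-style pyramid with perfect reconstruction and applies plain global histogram equalization, the un-tiled ancestor of CLAHE:

```python
import numpy as np

def build_laplacian_pyramid(img, levels):
    """Nearest-neighbour Laplacian pyramid (a stand-in for the paper's
    Laplacian-Gaussian operator): each level stores img - upsample(downsample)."""
    pyramid = []
    for _ in range(levels):
        low = img[::2, ::2]
        up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
        pyramid.append(img - up[:img.shape[0], :img.shape[1]])
        img = low
    pyramid.append(img)                           # low-pass residual
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid: upsample the residual and add back each detail level."""
    img = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
        img = up[:detail.shape[0], :detail.shape[1]] + detail
    return img

def equalize_histogram(img, bins=256):
    """Plain global histogram equalization; CLAHE adds per-tile histograms with
    clip limits, which are omitted here for brevity."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    cdf = hist.cumsum() / img.size
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)
```

Reconstruction is exact when no level is modified; in the paper's pipeline, the detail levels would pass through CLAHE and the low-pass residual through morphological operators before reconstruction.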
Advanced linear and nonlinear compensations for 16QAM SC-400G unrepeatered transmission system
NASA Astrophysics Data System (ADS)
Zhang, Junwen; Yu, Jianjun; Chien, Hung-Chang
2018-02-01
Digital signal processing (DSP) with both linear equalization and nonlinear compensation is studied in this paper for a single-carrier 400G system based on 65-GBaud 16-quadrature amplitude modulation (QAM) signals. The 16-QAM signals are generated and pre-processed with pre-equalization (Pre-EQ) and look-up-table (LUT) based pre-distortion (Pre-DT) at the transmitter (Tx) side. The implementation principles of training-based equalization and pre-distortion are presented with experimental studies. At the receiver (Rx) side, fiber-nonlinearity compensation based on digital backward propagation (DBP) is also utilized to further improve transmission performance. With joint LUT-based Pre-DT and DBP-based post-compensation to mitigate the impairments of the opto-electronic components and fiber nonlinearity, we demonstrate unrepeatered transmission of 1.6 Tb/s based on 4-lane 400G single-carrier PDM-16QAM over 205-km SSMF without a distributed amplifier.
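The LUT-based pre-distortion idea can be sketched in a few lines. The toy below uses binary intensity symbols and a made-up pattern-dependent distortion (the real system uses 16-QAM and measured transmitter errors); all names and the channel model are our own assumptions:

```python
import numpy as np

def build_lut(tx, rx, memory=2):
    """LUT training: average the observed error rx[n] - tx[n] for every
    length-`memory` transmitted symbol pattern."""
    sums, counts = {}, {}
    for n in range(memory - 1, len(tx)):
        key = tuple(tx[n - memory + 1:n + 1])
        sums[key] = sums.get(key, 0.0) + (rx[n] - tx[n])
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def predistort(tx, lut, memory=2):
    """Subtract the expected pattern-dependent error before transmission."""
    out = np.asarray(tx, dtype=float).copy()
    for n in range(memory - 1, len(tx)):
        out[n] -= lut.get(tuple(tx[n - memory + 1:n + 1]), 0.0)
    return out

def toy_channel(x):
    """Invented distortion: an additive error that depends on the sign pattern
    of the two most recent symbols (a crude stand-in for component nonlinearity)."""
    y = np.asarray(x, dtype=float).copy()
    y[1:] += 0.2 * np.sign(x[:-1]) * np.sign(x[1:])
    return y

rng = np.random.default_rng(11)
tx = rng.choice([-1.0, 1.0], 2000)
lut = build_lut(tx, toy_channel(tx))              # train on one uncorrected pass
rx_corrected = toy_channel(predistort(tx, lut))   # transmit the pre-distorted signal
```

In this deterministic toy the LUT learns the error per pattern exactly, so pre-distortion cancels the distortion; with real measured errors the LUT averages out noise and only reduces, rather than removes, the residual.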
NASA Astrophysics Data System (ADS)
Maghrabi, Mahmoud M. T.; Kumar, Shiva; Bakr, Mohamed H.
2018-02-01
This work introduces a powerful digital nonlinear feed-forward equalizer (NFFE) that exploits a multilayer artificial neural network (ANN). It mitigates impairments of optical communication systems arising from the nonlinearity introduced by direct photo-detection: in a direct-detection system, the detection process is nonlinear because the photo-current is proportional to the absolute square of the electric field. The proposed equalizer achieves high equalization performance at low computational cost, comparable to the benchmark compensation performance achieved by a maximum-likelihood sequence estimator. The equalizer trains an ANN to act as a nonlinear filter whose impulse response removes the intersymbol interference (ISI) distortions of the optical channel. Owing to extensive training, it achieves the ultimate performance limit of any feed-forward equalizer (FFE). The performance and efficiency of the equalizer are investigated by applying it to various practical short-reach fiber-optic communication scenarios, extracted from practical metro/media-access networks and data-center applications. The results show that the ANN-NFFE compensates for the received BER degradation and significantly increases tolerance to chromatic dispersion distortion.
Zhu, T; Rao, Y J; Wang, J L
2007-01-20
A novel dynamic gain equalizer for flattening Er-doped fiber amplifiers, based on a twisted long-period fiber grating (LPFG) induced by high-frequency CO2 laser pulses, is reported for the first time to our knowledge. Experimental results show that its transverse-load sensitivity is up to 0.34 dB/(g·mm^-1) at a twist ratio of approximately 20 rad/m, which is 7 times higher than that of a torsion-free LPFG. In addition, the strong orientation dependence of the transverse-load sensitivity reported previously for the torsion-free LPFG is weakened considerably. Such a dynamic gain equalizer based on the unique transverse-load characteristics of the twisted LPFG therefore provides a much larger adjustable range and makes packaging of the gain equalizer much easier. A demonstration flattened an Er-doped fiber amplifier to ±0.5 dB over a 32 nm bandwidth.
High-speed optical phase-shifting apparatus
Zortman, William A.
2016-11-08
An optical phase shifter includes an optical waveguide, a plurality of partial phase shifting elements arranged sequentially, and control circuitry electrically coupled to the partial phase shifting elements. The control circuitry is adapted to provide an activating signal to each of the N partial phase shifting elements such that the signal is delayed by a clock cycle between adjacent partial phase shifting elements in the sequence. The transit time for a guided optical pulse train between the input edges of consecutive partial phase shifting elements in the sequence is arranged to be equal to a clock cycle, thereby enabling pipelined processing of the optical pulses.
Hyper- and hypobaric processing of Tl-Ba-Ca-Cu-O superconductors
NASA Astrophysics Data System (ADS)
Goretta, K. C.; Routbort, J. L.; Shi, Donglu; Chen, J. G.; Hash, M. C.
1989-11-01
Tl-based superconductors of initial composition Tl:Ca:Ba:Cu = 2:2:2:3 and 1:3:1:3 were heated in oxygen at pressures of 10^4 to 6 × 10^5 Pa. The 2:2:2:3 composition formed primarily the 2-layer superconductor with zero resistance from 77 to 104 K. The 1:3:1:3 composition formed a nearly phase-pure 3-layer superconductor with a maximum zero-resistance temperature of 120 K. Application of hyperbaric pressure influenced phase purities and transition temperatures slightly; phase purities decreased significantly under hypobaric pressures.
Folding 'health' back into healthcare.
Green, David
2015-03-01
David Green, AIA, principal at the London office of Perkins + Will, and Basak Alkan, AICP, LEED AP, healthcare district planner at the architecture, interior, and urban design company's Atlanta, US, base, examine growing moves in the US to re-evaluate planning policies so that local environments are built that promote healthy activities, through the creation of so-called 'Health Districts'. Equally, they explain, healthcare 'systems' are starting to see the value of using their campuses to promote this process. In the UK, they argue, 'the timing is perfect for the re-evaluation of the relationship between the medical campus and the city'.
Angle transducer based on fiber Bragg gratings able for tunnel auscultation
NASA Astrophysics Data System (ADS)
Quintela, A.; Lázaro, J. M.; Quintela, M. A.; Mirapeix, J.; Muñoz-Berti, V.; López-Higuera, J. M.
2010-09-01
In this paper an angle transducer based on fiber Bragg gratings (FBGs) is presented. Two gratings are glued to a metallic platen, one on each side. The transducer is insensitive to temperature changes, given that temperature shifts affect both FBGs equally. When the platen is uniformly bent, a uniform strain appears on both sides of the platen; it depends on the bend angle and on the platen length and thickness. The transducer has been designed for the auscultation of tunnels both during construction and throughout their service life. The transducer design and its characterization are presented.
Optically Phase-Locked Electronic Speckle Pattern Interferometer (OPL-ESPI)
NASA Astrophysics Data System (ADS)
Moran, Steven E.; Law, Robert L.; Craig, Peter N.; Goldberg, Warren M.
1986-10-01
This report describes the design, theory, operation, and characteristics of the OPL-ESPI, which generates real-time equal-Doppler speckle contours of vibrating objects from unstable sensor platforms, with a Doppler resolution of 30 Hz and a maximum tracking range of ±5 MHz. The optical phase-locked loop compensates for the deleterious effects of ambient background vibration and provides the basis for a new ESPI video signal processing technique that produces high-contrast speckle contours. The OPL-ESPI system has local-oscillator phase-modulation capability, offering the potential for detection of vibrations with amplitudes less than λ/100.
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently to Positron Emission Tomography (PET) experiments, because it requires centralized coincidence processing and incurs large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing while maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes so that the same executable can run in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.
2005-03-01
A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. When processing in situ data, motion artifacts due to increased perfusion can produce invalid oxygen saturation values. To remove the unwanted artifacts from the pulsatile signal, adaptive filtering was employed, using a third wavelength source centered at 810 nm as a reference signal. The 810 nm source lies approximately at the isosbestic point of the hemoglobin absorption curve, where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation-based algorithm, oxygen saturation values can be obtained without the need for large sampling data sets, allowing near real-time processing. This technique has been shown to be more reliable than traditional techniques and to improve the measurement of oxygenation values in varying perfusion states.
Stephan, Milena; Mey, Ingo; Steinem, Claudia; Janshoff, Andreas
2014-02-04
The passage of solutes across a lipid membrane plays a central role in many cellular processes. However, the investigation of transport processes remains a serious challenge in pharmaceutical research, particularly for uncharged cargo. While translocation of ions across cell membranes is commonly measured with the patch-clamp technique, an equally powerful screening method for the transport of uncharged compounds is still lacking. A combined setup for reflectometric interference spectroscopy (RIfS) and fluorescence microscopy is presented that allows one to investigate the passive exchange of uncharged compounds across a free-standing membrane. Pore-spanning lipid membranes were prepared by spreading giant 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) vesicles on porous anodic aluminum oxide (AAO) membranes, creating sealed attoliter-sized compartments. The time-resolved leakage of different dye molecules (pyranine and crystal violet) through melittin-induced membrane pores and defects was investigated.
Gañan, J; González, J F; González-García, C M; Cuerda-Correa, E M; Macías-García, A
2006-03-01
In this work, a pyrolysis plant located in Valverde de Leganes, Badajoz (SW Spain) was studied. At present, only the solid phase obtained by pyrolysis finds an application, as domestic fuel. In order to analyze the feasibility of further energetic exploitation of the plant, the gases flowing through the chimneys were collected at different times throughout the pyrolysis process. They were then characterized and quantified by gas chromatography, and the energy potential of each gas was determined. According to the results obtained in this study, a total energy potential of 5.6 × 10^7 MJ (i.e., 1.78 MW(t)) might be generated yearly. Hence, assuming an overall process yield of 20%, up to 358 kW(e) would be produced. This power would supply enough electric energy for the industry itself, with the remainder fed into the common electric network.
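The quoted figures can be checked by straightforward unit conversion (the small gap to the quoted 358 kW(e) presumably reflects rounding in the original numbers):

```python
# Back-of-the-envelope check of the figures quoted above.
SECONDS_PER_YEAR = 365 * 24 * 3600

annual_energy_MJ = 5.6e7                       # total yearly energy potential
thermal_power_W = annual_energy_MJ * 1e6 / SECONDS_PER_YEAR
electric_power_W = 0.20 * thermal_power_W      # 20% overall process yield

print(f"{thermal_power_W / 1e6:.2f} MWt")      # 1.78 MWt, matching the abstract
print(f"{electric_power_W / 1e3:.0f} kWe")     # ~355 kWe, close to the quoted 358
```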
Winery wastewater treatment by a combined process: long term aerated storage and Fenton's reagent.
Lucas, Marco S; Mouta, Maria; Pirra, António; Peres, José A
2009-01-01
The degradation of the organic pollutants present in winery wastewater was carried out by combining two successive steps: an aerobic biological process followed by chemical oxidation with Fenton's reagent. The main goal of this study was to evaluate the temporal characteristics of solids and chemical oxygen demand (COD) in winery wastewater in a long-term aerated storage bioreactor. The performance of different daily air dosages supplied to the biological reactor, at laboratory and pilot scale, was examined. The long hydraulic retention time, 11 weeks, contributed remarkably to the reduction of COD (about 90%), and the combination with Fenton's reagent led to a high overall COD reduction, reaching 99.5% when the mass ratio R = H2O2/COD was equal to 2.5, keeping the molar ratio H2O2/Fe2+ = 15 constant.
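As a worked example of the dosing ratios quoted above (the residual COD value is hypothetical, chosen only for illustration):

```python
# Worked example of the H2O2/COD mass ratio (R = 2.5) and the
# H2O2/Fe2+ molar ratio (15) reported above.
cod_mg_per_L = 1200.0                  # residual COD in mg O2/L (assumed value)
R = 2.5                                # H2O2 : COD mass ratio from the study
h2o2_dose = R * cod_mg_per_L           # required H2O2, mg/L

M_H2O2, M_Fe = 34.01, 55.85            # molar masses, g/mol
fe_dose = (h2o2_dose / M_H2O2) / 15 * M_Fe   # Fe2+ catalyst, mg/L

print(round(h2o2_dose), round(fe_dose, 1))   # 3000 mg/L H2O2, ~328 mg/L Fe2+
```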
Zhang, Guosong; Hovem, Jens M.; Dong, Hefeng
2012-01-01
Underwater communication channels are often complicated; in particular, multipath propagation may cause intersymbol interference (ISI). This paper addresses how to remove ISI and evaluates the performance of three different receiver structures and their implementations. Using real data collected in a high-frequency (10–14 kHz) field experiment, the receiver structures are evaluated by off-line data processing. The three structures are a multichannel decision feedback equalizer (DFE), a passive time-reversal receiver (passive-phase conjugation (PPC) with a single-channel DFE), and joint PPC with a multichannel DFE. In sparse channels, dominant arrivals represent the channel information, and the matching pursuit (MP) algorithm, which exploits the channel sparseness, has been investigated for PPC processing. In the assessment, it is found that (1) it is advantageous to obtain spatial gain using the adaptive multichannel combining scheme, and (2) the MP algorithm improves the performance of communications using PPC processing. PMID:22438755
Taillefumier, Thibaud; Magnasco, Marcelo O
2013-04-16
Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss-Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
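The smooth-boundary regime (H > 1/2, where the first-passage density is continuous in time) can be illustrated with a plain Monte-Carlo simulation against a constant boundary, for which the reflection principle gives a closed-form crossing probability to compare against. The step size, path count, and level are illustrative, and time discretization biases the estimate slightly low:

```python
import numpy as np
from math import erf, sqrt

# Monte-Carlo first passage of standard Brownian motion through a constant
# boundary (a smooth boundary: Holder exponent 1 > Hc = 1/2). Illustrative
# parameters; discrete stepping misses some excursions, so the estimate
# sits a little below the exact value.
rng = np.random.default_rng(2)
n_paths, n_steps, T, a = 4000, 1000, 1.0, 1.0
dt = T / n_steps

increments = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
paths = np.cumsum(increments, axis=1)
hit = (paths >= a).any(axis=1)
p_mc = hit.mean()

# Reflection principle: P(hit level a by time T) = 2 * (1 - Phi(a / sqrt(T)))
phi = 0.5 * (1 + erf(a / sqrt(T) / sqrt(2)))
p_exact = 2 * (1 - phi)
print(p_mc, round(p_exact, 4))  # MC estimate near the exact 0.3173
```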
Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm
NASA Technical Reports Server (NTRS)
Lee, Allan Y.
1991-01-01
Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.
Time constant of defect relaxation in ion-irradiated 3C-SiC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, J. B.; Department of Nuclear Engineering, Texas A and M University, College Station, Texas 77843; Bayu Aji, L. B.
Above room temperature, the buildup of radiation damage in SiC is a dynamic process governed by the mobility and interaction of ballistically generated point defects. Here, we study the dynamics of radiation defects in 3C-SiC bombarded at 100 °C with 500 keV Ar ions, with the total ion dose split into a train of equal pulses. Damage–depth profiles are measured by ion channeling for a series of samples irradiated under identical conditions except for different durations of the passive part of the beam cycle. Results reveal an effective defect relaxation time constant of ∼3 ms (for second-order kinetics) and a dynamic annealing efficiency of ∼40% for defects in both Si and C sublattices. This demonstrates a crucial role of dynamic annealing at elevated temperatures and provides evidence of the strong coupling of defect accumulation processes in the two sublattices of 3C-SiC.
Diffusion theory of decision making in continuous report.
Smith, Philip L
2016-07-01
I present a diffusion model for decision making in continuous report tasks, in which a continuous, circularly distributed, stimulus attribute in working memory is matched to a representation of the attribute in the stimulus display. Memory retrieval is modeled as a 2-dimensional diffusion process with vector-valued drift on a disk, whose bounding circle represents the decision criterion. The direction and magnitude of the drift vector describe the identity of the stimulus and the quality of its representation in memory, respectively. The point at which the diffusion exits the disk determines the reported value of the attribute and the time to exit the disk determines the decision time. Expressions for the joint distribution of decision times and report outcomes are obtained by means of the Girsanov change-of-measure theorem, which allows the properties of the nonzero-drift diffusion process to be characterized as a function of a Euclidean-distance Bessel process. Predicted report precision is equal to the product of the decision criterion and the drift magnitude and follows a von Mises distribution, in agreement with the treatment of precision in the working memory literature. Trial-to-trial variability in criterion and drift rate leads, respectively, to direct and inverse relationships between report accuracy and decision times, in agreement with, and generalizing, the standard diffusion model of 2-choice decisions. The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
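A minimal simulation of the model as described, not the paper's analytical machinery: a 2-D random walk with a fixed drift vector run until it exits the unit disk; the exit angle is the reported value and the exit step gives the decision time. All parameter values are illustrative:

```python
import numpy as np

# Euler simulation of 2-D drift diffusion on the unit disk (illustrative
# drift magnitude, noise level, and step size; not the paper's parameters).
rng = np.random.default_rng(3)

def trial(drift_angle=0.0, drift_mag=1.5, sigma=1.0, dt=0.001):
    drift = drift_mag * np.array([np.cos(drift_angle), np.sin(drift_angle)])
    x = np.zeros(2)
    t = 0.0
    while x @ x < 1.0:                   # decision criterion: the unit circle
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
    return np.arctan2(x[1], x[0]), t     # (reported angle, decision time)

angles, times = zip(*(trial() for _ in range(300)))
print(np.mean(np.cos(angles)))  # reports concentrate around the true angle 0
```

Larger drift magnitudes concentrate the exit angles more tightly, mirroring the model's prediction that report precision grows with criterion times drift magnitude.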
Kimura, Kenta; Kimura, Motohiro; Iwaki, Sunao
2016-10-01
The present study investigated whether the evaluative processing of action feedback can be modulated by temporal prediction. For this purpose, we examined the effects of the predictability of feedback timing on an ERP effect that indexes the evaluative processing of action feedback: an effect interpreted as a feedback-related negativity (FRN) elicited by "bad" action feedback or a reward positivity (RewP) elicited by "good" action feedback. In two types of experimental blocks, participants performed a gambling task in which they chose one of two cards and received action feedback indicating monetary gain or loss. In fixed blocks, the interval between the participant's choice and the onset of the feedback was fixed at 0, 500, or 1,000 ms in separate blocks, so the timing of the feedback was predictable. In mixed blocks, the interval was randomly chosen from the same three values with equal probability, so the timing was less predictable. The results showed that the FRN/RewP was smaller in mixed than in fixed blocks for the 0-ms interval trials, whereas there was no difference between the two block types for the 500-ms and 1,000-ms interval trials. Interestingly, the smaller FRN/RewP was due to the modulation of gain ERPs rather than loss ERPs. These results suggest that temporal prediction can modulate the evaluative processing of action feedback, particularly good feedback such as that indicating monetary gain. © 2016 Society for Psychophysiological Research.
Frequency-Modulated, Continuous-Wave Laser Ranging Using Photon-Counting Detectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Barber, Zeb W.; Dahl, Jason
2014-01-01
Optical ranging is the problem of estimating the round-trip flight time of a phase- or amplitude-modulated optical beam that reflects off a target. Frequency-modulated, continuous-wave (FMCW) ranging systems obtain this estimate by performing an interferometric measurement between a local frequency-modulated laser beam and a delayed copy returning from the target. The range estimate is formed by mixing the target-return field with the local reference field on a beamsplitter and detecting the resultant beat modulation. In conventional FMCW ranging, the source modulation is linear in instantaneous frequency, the reference-arm field has many more photons than the target-return field, and the time-of-flight estimate is generated by balanced difference detection of the beamsplitter output, followed by a frequency-domain peak search. This work focused on determining the maximum-likelihood (ML) estimation algorithm when continuous-time photon-counting detectors are used. It is founded on a rigorous statistical characterization of the (random) photoelectron emission times as a function of the incident optical field, including the deleterious effects of dark current and dead time. These statistics enable derivation of the Cramér-Rao lower bound (CRB) on the accuracy of FMCW ranging, and derivation of the ML estimator, whose performance approaches this bound at high photon flux. The estimation algorithm was developed, and its optimality properties were shown in simulation. Experimental data show that it performs better than the conventional estimation algorithms used; the demonstrated improvement is a factor of √2 (about 1.414) over frequency-domain-based estimation. If the target-interrogating photons and the local reference-field photons are costed equally, the optimal allocation is to distribute photons equally between the two arms. This differs from the state of the art, in which the local field is stronger than the target return.
The optimal processing of the photocurrent processes at the outputs of the two detectors is to perform log-matched filtering followed by a summation and peak detection. This implies that neither difference detection, nor Fourier-domain peak detection, which are the staples of the state-of-the-art systems, is optimal when a weak local oscillator is employed.
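For orientation, the conventional frequency-domain estimator that this work improves upon can be sketched as follows (made-up chirp parameters and target range; a strong-local-oscillator, noise-free beat signal is assumed):

```python
import numpy as np

# Conventional frequency-domain FMCW range estimate: a linear chirp
# delayed by the round trip beats against the local copy at
# f_b = (B / T) * tau, with tau = 2R / c. All parameters are illustrative.
c = 3e8
B, T, fs = 1e9, 1e-3, 10e6          # 1 GHz sweep over 1 ms, 10 MS/s sampling
R_true = 300.0                      # target range, metres
tau = 2 * R_true / c

t = np.arange(int(T * fs)) / fs
beat = np.cos(2 * np.pi * (B / T) * tau * t)   # ideal beat tone at f_b = 2 MHz

spec = np.abs(np.fft.rfft(beat))               # frequency-domain peak search
f_b = np.fft.rfftfreq(len(beat), 1 / fs)[np.argmax(spec)]
R_est = f_b * T * c / (2 * B)
print(R_est)  # ~300 m
```

The ML photon-counting estimator described above replaces this FFT peak search with log-matched filtering of the photocurrent processes, summation, and peak detection.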
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments in sensors for Earth observation are moving toward much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor: with large increases in dimensionality and the number of classes, processing time increases significantly. To address this problem, a multistage classification scheme is proposed that reduces processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed, and the relationship between thresholds and the error caused by truncation is investigated. Next, an approach to feature extraction for classification is proposed, based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. The method builds on the observation that only a portion of the decision boundary is effective in discriminating between classes, leading to the concept of the effective decision boundary. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate when classes have equal means or equal covariances, as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used with both parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed, beginning with the increased importance of second order statistics in analyzing high dimensional data.
An investigation of the characteristics of high dimensional data suggests why second order statistics must be taken into account. Given their importance, there is a need to represent the second order statistics; a method to visualize them using a color code is proposed. By representing statistics with color coding, one can easily extract and compare the first- and second-order statistics.
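The truncation idea in the multistage scheme can be illustrated with a toy two-stage classifier. The class models, the cheap low-dimensional screen, and the truncation threshold below are all invented for illustration, not taken from the paper:

```python
import numpy as np

# Two-stage classification sketch: score all classes with a cheap statistic
# first, truncate classes whose score falls outside a threshold of the best,
# and run the expensive full-dimensional classifier only on the survivors.
rng = np.random.default_rng(4)

n_classes, dim = 20, 50
means = rng.standard_normal((n_classes, dim)) * 2.0   # toy class centres
x = means[7] + 0.5 * rng.standard_normal(dim)         # sample from class 7

# Stage 1: cheap screen using only the first 5 dimensions.
d_cheap = np.sum((x[:5] - means[:, :5]) ** 2, axis=1)
survivors = np.flatnonzero(d_cheap <= d_cheap.min() + 10.0)  # truncation

# Stage 2: full-dimensional distance, computed only for surviving classes.
d_full = np.sum((x - means[survivors]) ** 2, axis=1)
label = survivors[np.argmin(d_full)]
print(len(survivors), label)  # far fewer than 20 classes scored fully
```

Loosening the threshold trades extra stage-2 work for a smaller truncation error, which is the threshold/error relationship the abstract refers to.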
Plat, Rika; Lowie, Wander; de Bot, Kees
2018-01-01
Reaction time data have long been collected in order to gain insight into the underlying mechanisms involved in language processing. Means analyses often attempt to break down what factors relate to what portion of the total reaction time. From a dynamic systems theory perspective or an interaction dominant view of language processing, it is impossible to isolate discrete factors contributing to language processing, since these continually and interactively play a role. Non-linear analyses offer the tools to investigate the underlying process of language use in time, without having to isolate discrete factors. Patterns of variability in reaction time data may disclose the relative contribution of automatic (grapheme-to-phoneme conversion) processing and attention-demanding (semantic) processing. The presence of a fractal structure in the variability of a reaction time series indicates automaticity in the mental structures contributing to a task. A decorrelated pattern of variability will indicate a higher degree of attention-demanding processing. A focus on variability patterns allows us to examine the relative contribution of automatic and attention-demanding processing when a speaker is using the mother tongue (L1) or a second language (L2). A word naming task conducted in the L1 (Dutch) and L2 (English) shows L1 word processing to rely more on automatic spelling-to-sound conversion than L2 word processing. A word naming task with a semantic categorization subtask showed more reliance on attention-demanding semantic processing when using the L2. A comparison to L1 English data shows this was not only due to the amount of language use or language dominance, but also to the difference in orthographic depth between Dutch and English. 
An important implication of this finding is that when the same task is used to test and compare different languages, one cannot straightforwardly assume that the same cognitive subprocesses are involved to an equal degree in each language. PMID:29403404
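A standard way to quantify the fractal structure discussed above is detrended fluctuation analysis (DFA). The sketch below is a textbook DFA-1 implementation applied to synthetic data, not the analysis pipeline used in the study; the window sizes are illustrative:

```python
import numpy as np

# DFA-1 sketch: a scaling exponent alpha ~ 0.5 indicates a decorrelated
# (white-noise-like) series; alpha approaching 1 indicates 1/f-like,
# long-range-correlated variability of the kind associated with
# automatic processing in the study above.
def dfa_alpha(series, windows=(8, 16, 32, 64, 128)):
    profile = np.cumsum(series - np.mean(series))   # integrated series
    flucts = []
    for w in windows:
        n = len(profile) // w
        segs = profile[: n * w].reshape(n, w)
        t = np.arange(w)
        mse = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)            # linear detrend per window
            mse.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(mse)))        # RMS fluctuation F(w)
    slope, _ = np.polyfit(np.log(windows), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(5)
print(round(dfa_alpha(rng.standard_normal(4096)), 2))  # ~0.5 for white noise
```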
Properties of RBSN and RBSN-SiC composites. [Reaction Bonded Silicon Nitride
NASA Technical Reports Server (NTRS)
Lightfoot, A.; Ker, H. L.; Haggerty, J. S.; Ritter, J. E.
1990-01-01
Strengths, fracture toughnesses, hardnesses, and dimensional changes have been measured for RBSN and RBSN/SiC composites. Samples were made from mixtures of Si and either Si- or C-rich SiC powders. For pure, 75% dense RBSN dispersed with octanol, strengths up to 858 MPa have been achieved. The improved strengths result from a combination of microstructural perfection and increased fracture toughness. The mechanical properties of the composites were approximately equal to those of methanol-processed RBSN but not quite equal to those of octanol-processed RBSN. Results are discussed in terms of observed microstructural features.
Equity, Equal Shares or Equal Final Outcomes? Group Goal Guides Allocations of Public Goods.
Kazemi, Ali; Eek, Daniel; Gärling, Tommy
2017-01-01
In an experiment we investigate preferences for allocation of a public good among group members who contributed unequally in providing the public good. Inducing the group goal of productivity resulted in preferences for equitable allocations, whereas inducing the group goals of harmony and social concern resulted in preferences for equal final outcomes. The study makes a contribution by simultaneously treating provision and allocation of a public good, thus viewing these as related processes. Another contribution is that a new paradigm is introduced that bears closer resemblance to real life public good dilemmas than previous research paradigms do.
Presi, M; Chiuchiarelli, A; Corsini, R; Choudury, P; Bottoni, F; Giorgi, L; Ciaramella, E
2012-12-10
We report enhanced 10 Gb/s operation of directly modulated, bandwidth-limited reflective semiconductor optical amplifiers. Using a single suitable arrayed waveguide grating, we achieve WDM demultiplexing and optical equalization simultaneously. Compared to previous approaches, the proposed system is significantly more tolerant of seeding-wavelength drifts, which removes the need for wavelength lockers, additional electronic equalization, or complex digital signal processing. Uniform C-band operation is obtained experimentally with < 2 dB power penalty over a wavelength drift of 10 GHz (twice the ITU-T standard recommendation).
Blind adaptive equalization of polarization-switched QPSK modulation.
Millar, David S; Savory, Seb J
2011-04-25
Coherent detection in combination with digital signal processing has recently enabled significant progress in the capacity of optical communications systems. This improvement has enabled detection of optimum constellations for optical signals in four dimensions. In this paper, we propose and investigate an algorithm for the blind adaptive equalization of one such modulation format: polarization-switched quaternary phase shift keying (PS-QPSK). The proposed algorithm, which includes both blind initialization and adaptation of the equalizer, is found to be insensitive to the input polarization state and demonstrates highly robust convergence in the presence of PDL, DGD and polarization rotation.
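The constant-modulus principle underlying such blind equalizers can be sketched compactly. The following is a generic single-polarization CMA-style sketch, not the paper's PS-QPSK algorithm (which additionally handles the switched polarization and blind initialization of the full equalizer); the channel taps, step size, and lengths are illustrative:

```python
import numpy as np

# Blind constant-modulus (CMA) equalization of QPSK over a mild ISI
# channel. All parameters are illustrative.
rng = np.random.default_rng(6)
n = 5000
bits = lambda: rng.integers(0, 2, n) * 2 - 1
sym = (bits() + 1j * bits()) / np.sqrt(2)        # unit-modulus QPSK symbols
h = np.array([1.0, 0.4 + 0.2j])                  # ISI channel (assumed)
x = np.convolve(sym, h)[:n] + 0.01 * rng.standard_normal(n)

m, mu = 7, 0.01
w = np.zeros(m, dtype=complex)
w[m // 2] = 1.0                                  # centre-spike initialization
y = np.zeros(n, dtype=complex)
for i in range(m, n):
    u = x[i - m:i][::-1]
    y[i] = w @ u
    e = y[i] * (np.abs(y[i]) ** 2 - 1.0)         # CMA error, modulus R = 1
    w -= mu * e * np.conj(u)                     # stochastic-gradient update

disp_before = np.mean((np.abs(x[-1000:]) ** 2 - 1) ** 2)
disp_after = np.mean((np.abs(y[-1000:]) ** 2 - 1) ** 2)
print(disp_after < disp_before)  # modulus dispersion shrinks after adaptation
```

Note that CMA converges up to an arbitrary phase rotation, which is why practical receivers follow it with carrier-phase recovery.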
The wiper model: avalanche dynamics in an exclusion process
NASA Astrophysics Data System (ADS)
Politi, Antonio; Romano, M. Carmen
2013-10-01
The exclusion-process model (Ciandrini et al 2010 Phys. Rev. E 81 051904) describing traffic of particles with internal stepping dynamics reveals the presence of strong correlations in realistic regimes. Here we study such a model in the limit of an infinitely fast translocation time, where the evolution can be interpreted as a ‘wiper’ that moves to dry neighbouring sites. We trace back the existence of long-range correlations to the existence of avalanches, where many sites are dried at once. At variance with self-organized criticality, in the wiper model avalanches have a typical size equal to the logarithm of the lattice size. In the thermodynamic limit, we find that the hydrodynamic behaviour is a mixture of stochastic (diffusive) fluctuations and increasingly coherent periodic oscillations that are reminiscent of a collective dynamics.
Cognitive Differences in Pictorial Reasoning between High-Functioning Autism and Asperger’s Syndrome
Sahyoun, Cherif P.; Soulières, Isabelle; Belliveau, John W.; Mottron, Laurent; Mody, Maria
2013-01-01
We investigated linguistic and visuospatial processing during pictorial reasoning in high-functioning autism (HFA), Asperger’s syndrome (ASP), and age and IQ-matched typically developing participants (CTRL), using three conditions designed to differentially engage linguistic mediation or visuospatial processing (Visuospatial, V; Semantic, S; Visuospatial+Semantic, V+S). The three groups did not differ in accuracy, but showed different response time profiles. ASP and CTRL participants were fastest on V+S, amenable to both linguistic and nonlinguistic mediation, whereas HFA participants were equally fast on V and V+S, where visuospatial strategies were available, and slowest on S. HFA participants appeared to favor visuospatial over linguistic mediation. The results support the use of linguistic vs. visuospatial tasks for characterizing subtypes on the autism spectrum. PMID:19267190
Landsat thematic mapper attitude data processing
NASA Technical Reports Server (NTRS)
Sehn, G. J.; Miller, S. F.
1984-01-01
The Landsat 4 and 5 satellites carry a new, high-resolution, seven-band thematic mapper imaging instrument. The spacecraft also carry two types of attitude sensors: a gyroscopic internal reference unit (IRU), which senses angular rate from dc to about 2 Hz, and an AC-coupled angular displacement sensor (ADS), which measures angular deviation above 2 Hz. The derivation of the crossover network used to combine and equalize the IRU and ADS data is described. Also described are the digital data processing algorithms that produce the time history of the satellites' attitude motion, including the finite impulse response (FIR) implementation of the G and F filters; the resampling (interpolation/decimation) and synchronization of the IRU and ADS data; and the axis rotations required as a result of the on-board sensor locations on three orthogonal axes.
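The crossover idea, combining a low-frequency and a high-frequency sensor around a common corner frequency, can be caricatured with a first-order complementary filter: low-pass the attitude integrated from the IRU rate, high-pass the ADS displacement, and sum. This is an illustrative sketch under assumed sample rate and crossover frequency, not the Landsat G/F filter implementation; with a matched coefficient the two first-order paths sum exactly to unity.

```python
# Illustrative sketch (not the Landsat flight code): a first-order
# complementary crossover fusing a low-frequency attitude estimate
# (integrated IRU rate) with a high-frequency displacement signal (ADS).
# fs (sample rate) and fc (crossover frequency) are assumed values.

import math

def complementary_crossover(iru_rate, ads_disp, fs=64.0, fc=2.0):
    """Return fused attitude: low-passed integrated rate + high-passed ADS."""
    dt = 1.0 / fs
    alpha = dt / (dt + 1.0 / (2.0 * math.pi * fc))  # one-pole LP coefficient
    theta_iru = 0.0      # integral of the IRU rate signal
    lp = 0.0             # low-pass state (IRU path)
    hp_prev_in = 0.0     # high-pass states (ADS path)
    hp = 0.0
    fused = []
    for rate, disp in zip(iru_rate, ads_disp):
        theta_iru += rate * dt
        lp += alpha * (theta_iru - lp)                 # LP of IRU attitude
        hp = (1.0 - alpha) * (hp + disp - hp_prev_in)  # HP of ADS signal
        hp_prev_in = disp
        fused.append(lp + hp)
    return fused
```

With a constant 1 deg/s rate and a quiet ADS, the fused output tracks the true ramp with only the small steady-state lag of the low-pass stage.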
Dillen, Claudia; Steyaert, Jean; Op de Beeck, Hans P; Boets, Bart
2015-05-01
The embedded figures test has often been used to reveal weak central coherence in individuals with autism spectrum disorder (ASD). Here, we administered a more standardized, automated version of the embedded figures test in combination with the configural superiority task, to investigate the effect of contextual modulation on local feature detection in 23 adolescents with ASD and 26 matched typically developing controls. On both tasks, the two groups performed largely similarly in terms of accuracy and reaction time, and both displayed the contextual modulation effect. This indicates that individuals with ASD are as sensitive as typically developing individuals to the contextual effects of the task, and that there is no evidence for a local processing bias in adolescents with ASD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dooley, James H.; Lanning, David N.
Comminution process of wood veneer to produce wood particles, by feeding wood veneer in a direction of travel substantially normal to grain through a counter rotating pair of intermeshing arrays of cutting discs arrayed axially perpendicular to the direction of wood veneer travel, wherein the cutting discs have a uniform thickness (Td), to produce wood particles characterized by a length dimension (L) substantially equal to the Td and aligned substantially parallel to grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) aligned normal to W and L, wherein the W×H dimensions define a pair of substantially parallel end surfaces with end checking between crosscut fibers.
[Prevalence of Hypertriglyceridemia: New Data Across the Russian Population. The PROMETHEUS Study].
Karpov On Behalf Of Participants Of The Prometheus Study, Yu A
2016-07-01
The main purpose of the study was to estimate the prevalence of hypertriglyceridemia (HTG) in Russia. Secondary objectives were to explore HTG prevalence by level, age and sex, and to assess the correlation between glycated hemoglobin (HbA1C) and triglyceride (TG) levels. Additionally, we analyzed geographical differences in HTG prevalence across regions of Russia. This was a cross-sectional, retrospective, observational study using a database of lipid profile results from 357,072 subjects in 254 Russian cities during the 3-year period from 2011 to 2013. Altogether, 29.2% (95% confidence interval [CI] 29.1-29.4%) of Russian individuals had HTG (serum TG ≥1.7 mmol/L). The percentage of patients with very high (TG ≥5.6 mmol/L) and severe HTG (TG ≥10.0 mmol/L) was low (0.01% and 0.011%, respectively). At the same time, the proportion of subjects with mixed hyperlipidemia (total cholesterol [TC] ≥5.2 mmol/L, low-density lipoprotein cholesterol [LDL-C] ≥3.4 mmol/L, TG ≥1.7 mmol/L) was 19% of the study population. Men had a 1.25 (95% CI, 1.24-1.26) times higher risk of HTG than women. Prevalence of HTG increased with age: in women TG level was maximal in the age group 60-69 years (34%), whereas in men it was maximal in the age group 40-49 years (43%). Prevalence of HTG increased from 28% in 2011 to 30% in 2013 (p<0.0001). Risk of HTG was 1.69 times greater when HbA1C was ≥6.5%, and vice versa, risk of HbA1C ≥6.5% was 2.04 times higher in individuals with HTG. Distribution of HTG and dyslipidemia by region of Russia showed large variability, being higher in the south and lower in the northern regions of the European part of Russia. Almost a third of the Russian population has HTG. Men have a higher risk of HTG than women. Prevalence of HTG increases with age, peaking in the age groups 60-69 years (women) and 40-49 years (men). There is a linear association between high HbA1C and high TG levels. Prevalence of HTG and dyslipidemia is heterogeneous across Russian regions.
Probing SEP Acceleration Processes With Near-relativistic Electrons
NASA Astrophysics Data System (ADS)
Haggerty, Dennis K.; Roelof, Edmond C.
2009-11-01
Processes in the solar corona are prodigious accelerators of near-relativistic electrons. Only a small fraction of these electrons escape the low corona, yet they are by far the most abundant species observed in Solar Energetic Particle events. These beam-like energetic electron events are sometimes time-associated with coronal mass ejections from the western solar hemisphere. However, a significant number of events are observed without any apparent association with a transient event. The relationship between solar energetic particle events, coronal mass ejections, and near-relativistic electron events is better ordered when we classify the intensity time profiles during the duration of the beam-like anisotropies into three broad categories: 1) Spikes (rapid and equal rise and decay); 2) Pulses (rapid rise, slower decay); and 3) Ramps (rapid rise followed by a plateau). We report on the results of a study based on our catalog (covering nearly the complete Solar Cycle 23) of 216 near-relativistic electron events and their association with solar electromagnetic emissions, shocks driven by coronal mass ejections, models of the coronal magnetic fields, and energetic protons. We conclude that electron events with time-intensity profiles of Spikes and Pulses are associated with explosive events in the low corona, while events with time-intensity profiles of Ramps are associated with the injection/acceleration process of the CME-driven shock.
A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments
Mi, Jing; Colburn, H. Steven
2016-10-03
Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. PMID:27698261
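The masking step described in the abstract is easy to state schematically in code. The sketch below is a simplification under assumed parameter names (a known target interaural time difference, a fixed energy-drop threshold in dB); the paper's full EC stage and the Coherence-based Speech Intelligibility Index back end are not reproduced.

```python
# Schematic sketch of the EC-based binary mask idea (parameter names and the
# 6 dB threshold are assumptions, not the paper's values): for each
# time-frequency unit, delay-and-subtract cancels a target arriving from a
# known direction; a large energy drop after cancellation marks the unit as
# target-dominated, yielding a 0/1 mask.

import cmath
import math

def ec_binary_mask(left, right, target_itd, freqs, thresh_db=6.0):
    """left/right: complex T-F units per (time, freq); returns a 0/1 mask."""
    mask = []
    for l_frame, r_frame in zip(left, right):
        row = []
        for l, r, f in zip(l_frame, r_frame, freqs):
            # Equalize: undo the target's interaural delay, then cancel.
            shift = cmath.exp(-2j * math.pi * f * target_itd)
            residual = l - r * shift
            e_in = abs(l) ** 2 + abs(r) ** 2
            e_out = abs(residual) ** 2
            drop_db = 10 * math.log10((e_in + 1e-12) / (e_out + 1e-12))
            row.append(1 if drop_db > thresh_db else 0)
        mask.append(row)
    return mask
```

A target-dominated unit (identical ear signals for a frontal target) cancels almost completely and is selected; an interferer-dominated unit does not cancel and is rejected.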
ERIC Educational Resources Information Center
Burris, Christopher E.
1975-01-01
Wulff v. Singleton represents the first case in which a physician has been granted standing when his sole injury arose from the possibility that he might not be paid for performing abortions. It also represents the first time a physician, as opposed to his patient, has been held to be denied equal protection. The court's rationale is examined.…
Evaluation results for the positive deep-UV resist AZ DX 46
NASA Astrophysics Data System (ADS)
Spiess, Walter; Lynch, Thomas J.; Le Cornec, Charles; Escher, Gary C.; Kinoshita, Yoshiaki; Kochan, John; Kudo, Takanori; Masuda, Seiya; Mourier, Thierry; Nozaki, Yuko; Olson, Setha G.; Okazaki, Hiroshi; Padmanaban, Munirathna; Pawlowski, Georg; Przybilla, Klaus J.; Roeschert, Horst; Suehiro, Natusmi; Vinet, Francoise; Wengenroth, Horst
1994-05-01
This contribution emphasizes the resist application side by communicating lithographic results for AZ DX 46, obtained using the GCA XLS 7800/31 stepper, NA = 0.53, equipped with a krypton fluoride excimer laser (λ = 248 nm), model 4500 D, as exposure source, delivered by Cymer Laser Technologies. For the delay time experiments, an ASM-L PAS 5500/70 stepper, NA = 0.42, was used in combination with a Lambda Physik excimer laser, model 248 L.
Method for preparing ceramic composite
Alexander, K.B.; Tiegs, T.N.; Becher, P.F.; Waters, S.B.
1996-01-09
A process is disclosed for preparing a ceramic composite comprising blending TiC particulates, Al2O3 particulates and nickel aluminide and consolidating the mixture at a temperature and pressure sufficient to produce a densified ceramic composite having a fracture toughness equal to or greater than 7 MPa·m^(1/2) and a hardness equal to or greater than 18 GPa. 5 figs.
40 CFR Table 8 to Subpart Dddd of... - Continuous Compliance With the Work Practice Requirements
Code of Federal Regulations, 2010 CFR
2010-07-01
... moisture content less than or equal to 30 percent (by weight, dry basis) AND operate with an inlet dryer temperature of less than or equal to 600 °F Maintaining the 24-hour block average inlet furnish moisture... temperature of furnish moisture content and inlet dryer temperature. (2) Hardwood veneer dryer Process less...
40 CFR Table 8 to Subpart Dddd of... - Continuous Compliance With the Work Practice Requirements
Code of Federal Regulations, 2011 CFR
2011-07-01
... moisture content less than or equal to 30 percent (by weight, dry basis) AND operate with an inlet dryer temperature of less than or equal to 600 °F Maintaining the 24-hour block average inlet furnish moisture... temperature of furnish moisture content and inlet dryer temperature. (2) Hardwood veneer dryer Process less...
ERIC Educational Resources Information Center
Robins, Steven Lance; Fleisch, Brahm
2016-01-01
Hargreaves (2002) suggested that vigorous social movements have the potential to improve the quality of (and increase the equity in) public education. This paper explores the role of Equal Education, an education social movement in South Africa led by university students and secondary school learners, in the process of educational change. Drawing…
Ståhl, Christian; Müssener, Ulrika; Svensson, Tommy
2012-01-01
In 2008, time limits were introduced in Swedish sickness insurance, comprising a pre-defined schedule for return to work. The purpose of this study was to explore the experienced consequences of these time limits. Sick-listed persons, physicians, insurance officials and employers were interviewed regarding the process of sick-listing, rehabilitation and return to work in relation to the reform. The study comprises qualitative interviews with 11 sick-listed persons, 4 insurance officials, 5 employers and 4 physicians (n = 24). Physicians, employers and sick-listed persons described insurance officials as increasingly passive, with responsibility for the process placed on the sick-listed. Several ethical dilemmas were identified in which officials were forced to act against their ethical principles. Insurance officials' principle of care often clashed with the standardization of the process, which is based on principles of egalitarianism and equal treatment. The cases reported in this study suggest that a policy for activation and early return to work has in some cases had the opposite effect: central actors remain passive and responsibility is placed on the sick-listed, who lack the strength and knowledge to understand and navigate the system. The standardized insurance system here promoted experiences of procedural injustice, for both officials and sick-listed persons.
Keegan, Conor; Teljeur, Conor; Turner, Brian; Thomas, Steve
2016-09-01
The determinants of consumer mobility in voluntary health insurance markets providing duplicate cover are not well understood. Consumer mobility can have important implications for competition. Consumers should be price-responsive and be willing to switch insurer in search of the best-value products. Moreover, although theory suggests low-risk consumers are more likely to switch insurer, this process should not be driven by insurers looking to attract low risks. This study utilizes data on 320,830 VHI healthcare policies due for renewal between August 2013 and June 2014. At the time of renewal, policyholders were categorized as either 'switchers' or 'stayers', and policy information was collected for the prior 12 months. Differences between these groups were assessed by means of logistic regression. The ability of Ireland's risk equalization scheme to account for the relative attractiveness of switchers was also examined. Policyholders were price sensitive (OR 1.052, p < 0.01), however, price-sensitivity declined with age. Age (OR 0.971; p < 0.01) and hospital utilization (OR 0.977; p < 0.01) were both negatively associated with switching. In line with these findings, switchers were less costly than stayers for the 12 months prior to the switch/renew decision for single person (difference in average cost = €540.64) and multiple-person policies (difference in average cost = €450.74). Some cost differences remain for single-person policies following risk equalization (difference in average cost = €88.12). Consumers appear price-responsive, which is important for competition provided it is based on correct incentives. Risk equalization payments largely eliminated the profitable status of switchers, although further refinements may be required.
Special cascade LMS equalization scheme suitable for 60-GHz RoF transmission system.
Liu, Siming; Shen, Guansheng; Kou, Yanbin; Tian, Huiping
2016-05-16
We design a specific cascade least mean square (LMS) equalizer; to the best of our knowledge, this is the first time such an equalizer has been employed for a 60-GHz millimeter-wave (mm-wave) radio over fiber (RoF) system. The proposed cascade LMS equalizer consists of two sub-equalizers designated for optical and wireless channel compensation, respectively. We control the linear and nonlinear factors originating from the optical link and the wireless link separately. The cascade equalization scheme keeps the nonlinear distortions of the RoF system at a low level. We theoretically and experimentally investigate the parameters of the two sub-equalizers to reach their best performance. The experimental results show that the cascade equalization scheme has a faster convergence speed: it needs a training sequence of length 10,000 to reach its stable status, which is only half as long as the traditional LMS equalizer needs. With the proposed equalizer, the 60-GHz RoF system can successfully transmit a 5-Gbps BPSK signal over 10-km fiber and a 1.2-m wireless link under the forward error correction (FEC) limit of 10^-3. An improvement of 4 dBm and 1 dBm in power sensitivity at a BER of 10^-3 over the traditional LMS equalizer is observed when the signals are transmitted through back-to-back (BTB) and 10-km fiber plus 1.2-m wireless links, respectively.
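The cascade structure can be sketched generically: two LMS sub-equalizers in series, each adapted against the known training symbols. The real-valued toy below is a sketch under assumed parameters (tap count, step size, an attenuation-only test channel), not the paper's 60-GHz design.

```python
# Generic sketch of a cascade LMS equalizer (assumed parameters, not the
# paper's scheme): two sub-equalizers in series, each trained against the
# same known symbols, so each stage can target a different channel segment.

def lms_stage(x, d, n_taps=3, mu=0.02):
    """One LMS sub-equalizer: adapt taps so the output of x tracks d."""
    w = [0.0] * n_taps
    y_out = []
    for k in range(len(x)):
        taps = [x[k - i] if k - i >= 0 else 0.0 for i in range(n_taps)]
        y = sum(wi * ti for wi, ti in zip(w, taps))
        e = d[k] - y                                   # training error
        w = [wi + mu * e * ti for wi, ti in zip(w, taps)]
        y_out.append(y)
    return y_out

def cascade_equalize(received, training, **kw):
    stage1 = lms_stage(received, training, **kw)   # e.g. optical-link stage
    stage2 = lms_stage(stage1, training, **kw)     # e.g. wireless-link stage
    return stage2
```

On an attenuated BPSK training sequence the cascade recovers the transmitted ±1 symbols to within a small residual error after the taps converge.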
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
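The claimed relation is a plain sum of the three period-specific terms, which can be restated directly in code (the function name is illustrative, not from the patent):

```python
# Direct restatement of the patent's relation: future facility condition is
# the sum of the time-period-specific maintenance cost, modernization
# factor, and backlog factor.

def future_facility_condition(maintenance_cost, modernization_factor,
                              backlog_factor):
    return maintenance_cost + modernization_factor + backlog_factor
```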
Neutralization efficiency of alcohol based products used for rapid hand disinfection
Chojecka, Agnieszka; Tarka, Patryk; Kierzkowska, Anna; Nitsch-Osuch, Aneta; Kanecki, Krzysztof
Alcohols are the most commonly used active substances in preparations for rapid hand disinfection. They should be bactericidal within very short contact times. The PN-EN 13727 + A2: 2015-12 standard, for testing hygienic and surgical handrub disinfection preparations, provides mandatory test conditions for disinfectants at contact times in the range of 30 s to 60 s (hygienic handrub disinfection) and 60 s to 5 min (surgical handrub disinfection). Short contact times for hand hygiene products require a short neutralization time. For contact times less than or equal to 10 minutes, the estimated neutralization time is 10 s ± 1 s. Neutralization is a process that abolishes the action of disinfectants. Correct application of this process allows for the proper practical use of disinfectants and the correct assessment of their biocidal effect. Objectives: verification of the effectiveness of a 10-second neutralization time for alcohol-based preparations for hygienic handrub disinfection. Neutralization of two products with different ethanol contents (89% and 70%) for hygienic handrub disinfection was investigated according to PN-EN 13727 + A2: 2015-12. The effectiveness of the neutralizer was assessed by determining the toxicity of the neutralizer and the residual activity of the tested products and their derivatives produced during neutralization (10 s) for the test organisms (Staphylococcus aureus ATCC 6538; Pseudomonas aeruginosa ATCC 15442; Enterococcus hirae ATCC 10541; Escherichia coli K12 NCTC 10538). The 10-second neutralization time was sufficient to eliminate the residual activity of products for hygienic handrub disinfection with different ethanol concentrations. The neutralizer used did not show toxicity to the bacteria and did not produce toxic products with the tested preparations after neutralization. Conclusions: the use of a 10-second neutralization time allows precise determination of the contact times for hygienic handrub disinfection products.
Grondin, Sean C.; Schieman, Colin; Kelly, Elizabeth; Darling, Gail; Maziak, Donna; Mackay, Moné Palacios; Gelfand, Gary
2013-01-01
Background The purpose of this study is to describe the demographics, training and practice characteristics of physicians performing thoracic surgery across Canada to better assess workforce needs. Methods We developed a questionnaire using a modified Delphi process to generate questionnaire items. The questionnaire was administered to all Canadian thoracic surgeons via email (n = 102) or mail (n = 35). Results In all, 97 surgeons completed the survey (71% response rate). The mean age of respondents was 47.7 (standard deviation 9.1) years; 10.3% were older than 60. Ninety respondents (88.7%) were men, 95 (81.1%) practised in English and 93 (76%) were born in Canada. Most (90.4%) had a medical school affiliation, with an equal proportion practising in community or university teaching hospitals. Only 18% of respondents reported working fewer than 60 hours per week, and 34% were on call more than 1 in 3. Three-quarters of work hours were devoted to clinical care, with the remaining time split among research, administration and teaching. Malignant lung disease accounted for 61.2% of practice time, with the remaining time equally split between benign and malignant thoracic diseases. Preoperative testing (49.4%) and insufficient operating time (49.5%) were the most common factors delaying delivery of care. More than 80% of respondents reported being satisfied with their careers, with 62.1% planning on retiring after age 60. Conclusion This survey characterizes Canadian thoracic surgeons by providing specific demographic, satisfaction and scope of practice information. Despite challenges in obtaining adequate resources for providing timely care, job satisfaction remains high, with a balanced workforce supply and demand anticipated for the foreseeable future. PMID:23883508