# Search Results for "Frequency aliasing"


1-20 of 1874 Search Results for

#### Frequency aliasing


Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2015 SEG Annual Meeting, October 18–23, 2015

Paper Number: SEG-2015-5744712

… By low-pass filtering the under-sampled seismic data with a very low bound **frequency**, we can get a very precise dip estimation, which will make the seislet transform capable of interpolating the **aliased** seismic data. In order to prepare the optimum local slope during iterations, we update the slope field…
Abstract

Summary Interpolating regularly missing traces in seismic data is thought to be much harder than interpolating irregularly missing traces, because many sparsity-based approaches cannot be used due to the strong aliasing noise in the sparse domain. We propose to use the seislet transform in a sparsity-based approach to interpolate highly under-sampled seismic data within the classic projection onto convex sets (POCS) framework. Many numerical tests show that the local slope is the main factor affecting the sparsity and anti-aliasing ability of the seislet transform. By low-pass filtering the under-sampled seismic data with a very low bound frequency, we can get a very precise dip estimation, which makes the seislet transform capable of interpolating the aliased seismic data. In order to maintain the optimum local slope during iterations, we update the slope field every few iterations. We also use a percentile thresholding approach to better control the reconstruction performance. Both synthetic and field examples show excellent performance of the proposed approach. Introduction For various reasons, seismic data may have missing traces. Seismic data reconstruction removes sampling artifacts and improves amplitude analysis, both of which are very important for subsequent processing steps including high-resolution processing, wave-equation migration, multiple suppression, amplitude-versus-offset (AVO) or amplitude-versus-azimuth (AVAZ) analysis, and time-lapse studies (Trad et al., 2002; Liu and Sacchi, 2004; Abma and Kabir, 2005; Wang et al., 2010; Naghizadeh and Sacchi, 2010). In recent years, driven by developments in compressive sensing, many sparsity-based methods have appeared for interpolating irregularly sampled seismic data.
However, for regularly missing traces, sparsity-based methods (Abma and Kabir, 2006; Li et al., 2012, 2013; Chen et al., 2014a) cannot obtain satisfactory results because of the strong aliasing noise in the transform domain. Instead, prediction-based approaches (Spitz, 1991; Naghizadeh and Sacchi, 2007) remain the best approaches for interpolating regularly missing traces. In this paper, we propose to use the seislet transform to perform a sparsity-based reconstruction, based on the well-established projection onto convex sets (POCS) framework (Abma and Kabir, 2006). Many numerical studies show that the local slope is the main factor affecting the sparsity and anti-aliasing ability of the seislet transform. Although we cannot obtain a precise dip estimation from the original aliased data, we can use low-pass-filtered data (below 15 Hz) to estimate the local slope, construct the seislet transform of the full-band seismic data, and perform thresholding. Synthetic and field data examples show excellent results using the proposed approach.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2015 SEG Annual Meeting, October 18–23, 2015

Paper Number: SEG-2015-5857262

… Summary Broadband data allows high **frequencies** to be recovered, but due to operational and efficiency reasons the cross-line bin size leaves much of these data **aliased**. These **aliased** data, either noise or the desired signal, are filtered during processing and imaging, leaving either…
Abstract

Summary Broadband data allows high frequencies to be recovered, but due to operational and efficiency reasons the cross-line bin size leaves much of these data aliased. These aliased data, either noise or the desired signal, are filtered during processing and imaging, leaving either artefacts in the case of noise or incomplete migration in the case of signal. An efficient method of acquiring dense cross-line bin sizes is proposed which relies on multiple blended sources; variable streamer separations in combination with fan-mode shooting; and real-world spread movements to achieve a randomized midpoint distribution that fully samples the wavefield, within the kinematic constraints of narrow-azimuth spatial sampling. Introduction Broadband acquisition has become mainstream in the industry, but its potential is limited by the poor cross-line sampling that current source and streamer separations allow, which leaves high frequencies aliased in the cross-line direction. These separations are limited for efficiency reasons, that is, the ability to acquire the survey in a reasonable time-frame and at reasonable cost, and for operational ones, where the streamers are at risk of tangling with small separations, especially with long offsets. However, given the general depth of targets and the earth's attenuation of high frequencies, the typical cross-line sampling is sufficient; where it fails is for potentially high-frequency shallow targets and for high-frequency noise that appears near-random at the target due to aliasing. To overcome the cross-line aliasing, an acquisition geometry is proposed which uses multiple sources and variable streamer separations to provide small cross-line bins of 6.25 m instead of the traditional 25 m bins. The cost of this is the reduction of the acquisition footprint, for 12 streamers, from 600 to 500 m.
To achieve this, the streamer and source separations are set such that not all bins are nominally covered; instead the system relies on real-world spread movements and fan-mode shooting to complete coverage. For the sources, each of the six sub-array elements is equally spaced at 12.5 m, and sub-arrays are shared between sources to provide 5 sources spaced at 12.5 m [Dunbar, J., Patent US 4868793 A]. An initial test was conducted to acquire 4 lines in the proposed geometry to test the basic issues of the physical deployment and recording-room setup, and the premise that bins would be sufficiently filled in a real-world scenario.

Proceedings Papers

Corentin Chiffot, Anthony Prescott, Martin Grimshaw, Francesca Oggioni, Monika Kowalczyk-Kedzierska, Sharon Cooper, David Le Meur, Rodney Johnston

Publisher: Society of Exploration Geophysicists

Paper presented at the 2017 SEG International Exposition and Annual Meeting, September 24–29, 2017

Paper Number: SEG-2017-17724305

… ABSTRACT We propose a data-driven interferometry technique to remove low-**frequency** **aliased** and non-conical surface waves in the cross-spread domain. Despite insufficient sampling of the constructive regions in the cross-spread domain, the proposed approach has been designed for effective handling…
Abstract

ABSTRACT We propose a data-driven interferometry technique to remove low-frequency aliased and non-conical surface waves in the cross-spread domain. Despite insufficient sampling of the constructive regions in the cross-spread domain, the proposed approach has been designed for effective handling of any kind of 3D geometry from narrow- to wide-azimuth land data using prior regularization and/or densification. This implementation provides a cost-effective workflow for large datasets and produces good removal of spatially aliased non-linear surface waves with minimal primary leakage. Presentation Date: Thursday, September 28, 2017 Start Time: 8:55 AM Location: 360A Presentation Type: ORAL

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2007 SEG Annual Meeting, September 23–28, 2007

Paper Number: SEG-2007-2344

… of the traditional filtering done during this process that will improve the performance and precision of the results. In most Kirchhoff migration implementations, a triangular smoothing filter is used to avoid high-**frequency** **aliasing** along the migration operator. This filter is implemented in three steps: causal…
Abstract

INTRODUCTION SUMMARY Since 3D prestack Kirchhoff depth migration (KPSDM) has become one of the leading imaging tools for hydrocarbon exploration, its accurate and precise handling of the kinematic and dynamic aspects of the wavefield has become central to R&D efforts worldwide. A separate paper in these proceedings by the same author describes a modified antialiasing filter weight that corrects for amplitude artifacts observable in earlier designs. Here we continue the effort of developing an efficient true-amplitude migration algorithm by suggesting a simplification of the traditional filtering done during this process that will improve the performance and precision of the results. In most Kirchhoff migration implementations, a triangular smoothing filter is used to avoid high-frequency aliasing along the migration operator. This filter is implemented in three steps: causal integration, anti-causal integration, and a Laplace-type differentiation along the diffraction stacking surface. In addition, a derivative filter (known as the r-filter) is applied to the input data to correct for the wavelet phase rotation introduced by the Kirchhoff summation. We will find that the standard filtering sequence of applying the r-filter, causal integration, and anti-causal integration can be replaced by just an anti-causal integration. Kirchhoff migration provides one of the best imaging solutions when data are non-uniformly distributed in space. It is also fast and flexible when it comes to input/output geometries. In our quest to make 3D KPSDM better and faster, we have found an alternative that achieves a comparable result by replacing the traditional filtering sequence of an r-filter, causal integration, and anti-causal integration with a single anti-causal integration filter. Note that this is applicable to time as well as depth migration algorithms.
Before the data are stacked along a diffraction surface, they are filtered for waveform phase shaping and pre-filtered in preparation for the antialiasing operation applied during summation. Since the integration process shifts the waveform shape, the r-filter restores it. The anti-aliasing filter is applied in three stages: causal integration, anti-causal integration, and then a Laplacian computation rolling along the stacking diffraction surface. By replacing these three filters with a single anti-causal integration, the performance and accuracy of the algorithm can be greatly improved. The results presented here also incorporate the normalization factor in the anti-aliasing filter mentioned above and described elsewhere in these proceedings. These corrections eliminate azimuthally anisotropic amplitude behavior in the migration impulse response, as well as amplitude distortions with time and offset introduced by the traditional scaling factor in Lumley et al. (1994) and Abma et al. (1999). THREE FILTERS IN ONE Differentiation in the frequency domain can be accomplished by a multiplication by −iω, where ω represents the circular frequency and Δt is the temporal sampling rate. As the sampling rate Δt goes to zero, the approximation becomes an equality. From this it is straightforward to conclude that the three filters, r-filter, causal integration, and anti-causal integration, are equivalent to just an anti-causal integration.
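The "three filters in one" argument can be checked numerically with simplified discrete stand-ins: a first difference for the derivative (r-) filter, a running sum for causal integration, and a reversed running sum for anti-causal integration. In this discrete toy the derivative followed by causal integration cancels exactly, so the chain collapses to the anti-causal integration alone (the paper's filters act along the diffraction surface; this only illustrates the cancellation):

```python
import numpy as np

def derivative(x):       # discrete stand-in for the r-filter (d/dt)
    return np.diff(x, prepend=0.0)

def causal_int(x):       # running sum ~ causal integration
    return np.cumsum(x)

def anticausal_int(x):   # reversed running sum ~ anti-causal integration
    return np.cumsum(x[::-1])[::-1]

rng = np.random.default_rng(1)
trace = rng.standard_normal(256)

# derivative followed by causal integration cancels exactly in discrete
# form, so the three-filter chain equals the single anti-causal integration
three_filters = anticausal_int(causal_int(derivative(trace)))
one_filter = anticausal_int(trace)
```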

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2013 SEG Annual Meeting, September 22–27, 2013

Paper Number: SEG-2013-1081

… than expected **frequencies** and **frequency** **aliasing**. A 3-point derivative solution departs from the ideal **frequency** response at 0.2 Nyquist while a 7-point solution departs at 0.4 Nyquist. The mitigation of these issues requires the use of a properly tapered and padded signal for calculation of the analytic…
Abstract

Summary Instantaneous attributes are commonly used in interpretation and in quantitative prediction of reservoir properties. Quantitative use of instantaneous attributes requires that they be free of systematic error. Our goal is to highlight algorithm improvements that are necessary for their quantitative use. We focus on improvements to the analytic trace calculation and to the form of the derivative used in calculating instantaneous attributes, with emphasis on how that form affects instantaneous frequency. Our analytic trace results show that proper treatment of the Hilbert transform is necessary to prevent spectral-leakage-induced errors of up to 10% in the envelope. The size of the error depends on trace length and distance from the ends of the trace. Our results for instantaneous frequency show that it is biased by the choice of phase derivative and calculation method, leading to lower than expected frequencies and frequency aliasing. A 3-point derivative solution departs from the ideal frequency response at 0.2 Nyquist, while a 7-point solution departs at 0.4 Nyquist. The mitigation of these issues requires the use of a properly tapered and padded signal for calculation of the analytic trace, and the use of phase derivatives whose departures from the ideal response occur at higher frequencies.
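The quoted departure points can be reproduced from the frequency responses of standard central-difference stencils (assumed here; the paper's exact operators may differ). Each function returns the ratio of the stencil's response to the ideal derivative iω, so values below 1 show the bias toward lower-than-expected frequencies:

```python
import numpy as np

def response_3pt(u):
    """Response ratio of the 3-point central difference (x[n+1]-x[n-1])/(2*dt)
    to the ideal derivative i*omega; u is frequency as a fraction of Nyquist."""
    th = np.pi * u                       # theta = omega * dt
    return np.sin(th) / th

def response_7pt(u):
    """Same ratio for the standard 6th-order 7-point central-difference
    stencil (45, -9, 1)/(60*dt) on each side."""
    th = np.pi * u
    return (90 * np.sin(th) - 18 * np.sin(2 * th) + 2 * np.sin(3 * th)) / (60 * th)
```

At 0.2 Nyquist the 3-point stencil already reads several percent low while the 7-point stencil is still essentially ideal; by 0.4 Nyquist the 7-point stencil has also departed, consistent with the abstract's claim.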

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2008 SEG Annual Meeting, November 9–14, 2008

Paper Number: SEG-2008-2627

… filters estimated from the low-**frequency** (alias-free) portion of the data are used to interpolate the high-**frequency** (**aliased**) data components. Several modifications to Spitz's prediction-filtering interpolation have been proposed. For instance, Porsani (1999) proposed a half-step prediction filter…
Abstract

SUMMARY The Exponentially Weighted Recursive Least Squares (EWRLS) method is adopted to estimate adaptive prediction filters for F-X seismic interpolation. Adaptive prediction filters are able to model signals whose dominant wave-numbers vary in space. This concept leads to an F-X interpolation method that does not require windowing strategies for optimal results. Synthetic and real data examples are used to illustrate the performance of the proposed adaptive F-X interpolation method. INTRODUCTION Spitz (1991) introduced a seismic trace interpolation method that utilizes prediction filters in the frequency-space (F-X) domain. Spitz's algorithm is based on the fact that linear events in the time-space (T-X) domain map to a superposition of complex sinusoids in the F-X domain. Complex sinusoids can be reconstructed via prediction filters (autoregressive operators); this property is used to establish a signal model for F-X interpolation (Spitz, 1991) and F-X random noise attenuation (Canales, 1984; Soubaras, 1994; Sacchi and Kuehl, 2000). Spitz (1991) showed that prediction filters obtained at frequency f can be used to interpolate data at temporal frequency 2f. Prediction filters estimated from the low-frequency (alias-free) portion of the data are used to interpolate the high-frequency (aliased) data components. Several modifications to Spitz's prediction-filtering interpolation have been proposed. For instance, Porsani (1999) proposed a half-step prediction filter scheme that makes the interpolation process more efficient. Gulunay (2003) introduced an algorithm with similarities to F-X prediction filtering and a very elegant representation in the frequency-wavenumber (F-K) domain. Recently, Naghizadeh and Sacchi (2007) proposed a modification of F-X interpolation that allows data with gaps to be reconstructed. Seismic interpolation algorithms depend on a signal model.
F-X interpolation methods are no exception to the preceding statement; they assume data composed of a finite number of waveforms with constant dip. This assumption can be validated via windowing. Interpolation methods driven by, for instance, local Radon transforms (Sacchi et al., 2004) and curvelet frames (Herrmann and Hennenfent, 2008) assume a signal model that consists of events with constant local dip. In addition, they implicitly define operators that are local without the need for windowing. This is an attractive property, in particular when compared to non-local interpolation methods (operators defined on a large spatial aperture), where optimal results are achievable only when seismic events match the kinematic signature of the operator. Examples of the latter are interpolation methods based on the hyperbolic/parabolic Radon transforms (Darche, 1990; Trad et al., 2002) and migration operators (Trad, 2003). As we have already pointed out, F-X methods require windowing strategies to cope with continuous changes in dominant wave-numbers (or dips in T-X). In this article we propose a method that avoids the need for spatial windows. The proposed interpolation automatically updates prediction filters as lateral variations of dip are encountered. This concept could be implemented as a somewhat cumbersome process requiring classical F-X interpolation in a rolling window. In this paper we have preferred to use the framework of recursive least squares (Honig and Messerschmidt, 1984; Marple, 1987) to update prediction filters in a recursive fashion.
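Spitz's key observation, that a prediction filter estimated at frequency f on the recorded trace spacing also predicts traces at 2f on the half spacing, can be verified on a toy single-dip event in the F-X domain. All parameters below are hypothetical, and a one-tap filter suffices because there is only one dip:

```python
import numpy as np

# toy single dipping event in the F-X domain (hypothetical parameters)
dx, p, f = 25.0, 2e-4, 12.0          # trace spacing (m), slope (s/m), frequency (Hz)
x_coarse = np.arange(32) * dx

def fx_event(freq, x):
    """F-X response of one linear event with slope p at frequency freq."""
    return np.exp(2j * np.pi * freq * p * x)

# one-tap prediction filter estimated at frequency f on the coarse grid
d_f = fx_event(f, x_coarse)
a = np.vdot(d_f[:-1], d_f[1:]) / np.vdot(d_f[:-1], d_f[:-1])

# the same filter predicts traces at 2f on the twice-finer grid:
d_2f_known = fx_event(2 * f, x_coarse)            # recorded traces at 2f
d_2f_mid_pred = a * d_2f_known                    # predicted in-between traces
d_2f_mid_true = fx_event(2 * f, x_coarse + dx / 2)  # ground truth at midpoints
```

The identity holds because the filter at f on spacing dx is exp(2πi·f·p·dx), which equals the filter at 2f on spacing dx/2.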

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2008 SEG Annual Meeting, November 9–14, 2008

Paper Number: SEG-2008-3375

… of approximately 15,000 vibe points. High-**frequency** **aliasing** was expected at **frequencies** of 145 Hz or higher for source spacing of greater than 300 ft. …
Abstract

Summary The BP Wamsutter Seismic Integration team conducted two seismic field trials in 2006/2007: a surface seismic field trial and a borehole seismic field trial. The borehole seismic field trial consisted of a 3D VSP as well as a four-well crosswell seismic campaign. Significantly higher frequencies (double the bandwidth) were successfully achieved with the 3D VSP as compared to existing and newly acquired surface seismic data. Prestack depth migration yielded excellent imaging results, which have allowed enhanced stratigraphic description of a very complex reservoir. Analysis, interpretation and integration of the VSP data have greatly progressed our understanding of the potential increase in value of "designer" or fit-for-purpose seismic across the Wamsutter Field and beyond. The Wamsutter 3D VSP represents a true success story, from definition of the problem, through safe field acquisition, to data interpretation and integration. Introduction The Wamsutter Field in Wyoming (Figure 1) is a huge US onshore tight-gas field, discovered in the 1960s and in production since the 1970s. The area covers 1600 sq miles, with surface seismic coverage of various vintages covering more than 1000 sq miles. Primary reservoirs are the Cretaceous Almond sands, with a gross interval thickness of 500 ft, at an average depth of 10,000 ft. As part of a significant 2006-2007 technology effort, the 3D VSP was acquired to further our understanding of seismic technical limits within the field, demonstrate the value of enhanced temporal resolution for reservoir characterization, and test the viability of borehole seismic as a development tool for infill planning. The complex, heterogeneous, and thin-bedded nature of the reservoir sands makes detailed reservoir characterization from surface seismic data extremely challenging.
Recent work in 3D VSP imaging demonstrates its potential (Ray et al., 2003; Paulsson et al., 2004; Hornby et al., 2005), with extensive surveys being acquired both on land and offshore. For Wamsutter, pre-acquisition 1-D modeling from existing well data allowed theoretical limits of vertical seismic resolution to be compared to existing data (Figure 2). In Figure 2, we clearly see that the 90 Hz modeled 3D VSP image has the potential to separate out individual layers in the reservoir that are below surface seismic resolution. Pre-survey modeling and survey design As part of the survey design effort, an extensive amount of finite-difference modeling was carried out in order to understand what was required to image the reservoir, as well as to optimize the acquisition parameters. Parameters evaluated in the modeling included tool location relative to the target interval, receiver interval, source spacing and offset, and maximum migration frequency. From the finite-difference modeling, the "perfect design" would have shots at 200 ft spacing out to an offset of 14,000 ft. This would in effect mean a source effort of approximately 15,000 vibe points. High-frequency aliasing was expected at frequencies of 145 Hz or higher for source spacing greater than 300 ft.
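As a back-of-envelope consistency check on the quoted numbers: spatial aliasing sets in when the apparent wavelength falls below twice the trace (here, source) spacing, i.e. f_alias = v_apparent / (2·Δx). The implied apparent velocity is an inference from the quoted figures, not a value stated in the paper:

```python
# Back-of-envelope check of the quoted aliasing limit:
# f_alias = v_apparent / (2 * dx), so the quoted 145 Hz limit at a
# 300 ft spacing implies an apparent velocity of about 87,000 ft/s
# for the steepest energy (our inference, not a figure from the paper).
dx_ft = 300.0                         # source spacing quoted in the abstract (ft)
f_alias_hz = 145.0                    # aliasing frequency quoted in the abstract (Hz)
v_apparent = 2 * dx_ft * f_alias_hz   # implied apparent velocity (ft/s)
```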

Proceedings Papers

Publisher: Offshore Technology Conference

Paper presented at the Offshore Technology Conference, May 6–9, 1985

Paper Number: OTC-4864-MS

… into an interpretable format. Marine three-dimensional (3-D) seismic surveys traditionally have been planned and executed using rectilinear grids with line-to-line spacing sufficiently close (25 to 75 meters) to avoid spatial-**frequency** **aliasing** in the crossline direction. Designing field procedures to meet…
Abstract

ABSTRACT A major portion of the relatively high cost associated with the acquisition of marine three-dimensional (3-D) seismic data can be attributed to the use of closely spaced (25 to 75 meter) parallel lines. The consequence of this procedure is a large amount of non-productive field time encountered while the recording vessel maneuvers between successive lines. Redesigning marine 3-D acquisition and processing procedures to utilize overlapping circular traverses greatly reduces the amount of non-productive time and introduces a corresponding and significant cost reduction. Technical advantages that can translate into improved final data quality are also provided. Field operating and data processing procedures have been developed which have led to the successful completion of an experimental marine 3-D survey using overlapping circular traverses. INTRODUCTION When weighing the merits of acquiring a marine three-dimensional (3-D) survey, two major considerations are the cost of the survey and the total time required to record and process these data into an interpretable format. Marine three-dimensional (3-D) seismic surveys traditionally have been planned and executed using rectilinear grids with line-to-line spacing sufficiently close (25 to 75 meters) to avoid spatial-frequency aliasing in the crossline direction. Designing field procedures to meet these requirements has been governed by various philosophies. These considerations are streamer tracking and control, absolute reference position, and acceptable relative positions for each compass and ultimately each geophone group center. Initially, the idealized configuration was thought to require the streamer to track directly behind the towing vessel. The 3-D data binning operation for any particular bin would collect only data recorded by a single shot line. Ocean currents make this procedure very difficult at best.
Adherence to the concept required rejecting a significant amount of data and almost always involved the elimination of far-offset or intermediate-offset data. This produced an undesirable distribution of offsets within each bin. The subsequent practice of binning data from adjacent lines into a regular grid frequently failed to achieve reasonable and uniform common-midpoint coverage with acceptable distributions of source-to-receiver offset distances. Field specifications and procedures were later modified to achieve and accept a larger degree of streamer feathering to provide better source-to-receiver offset distribution within any one line. Data collected could now originate from several lines. Some binning procedures were designed to follow curved paths in order to obtain better coverage and offset distribution. The procedures generally followed today are a combination of the above ideas, augmented in some cases by partial prestack migration. During this evolutionary period, navigation or steerage reference points have included the navigation antenna, the center of the energy source array, the midpoint between the source and the first geophone group, the first geophone group, or some selected point along the marine streamer. The method chosen for any specific survey has been governed by a particular philosophy, theory, experience, or operational conditions such as sea state or currents.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1990 SEG Annual Meeting, September 23–27, 1990

Paper Number: SEG-1990-0579

… magnetometer readings through **aliasing** of high-**frequency** magnetic interference. A set of standard techniques and calibrated displays was devised to allow meaningful comparison of noise levels between different magnetometer sensors, data acquisition systems and installations. INTRODUCTION Magnetometers continue…
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1999 SEG Annual Meeting, October 31–November 5, 1999

Paper Number: SEG-1999-1134

… events from the migrated image. Gray (1992) proposed an approach which removed only the **frequency** components that are **aliased**. Gray created multiple versions of each input trace filtered with various high-cut filters. When the migration required a certain **frequency** limit, the filtered version…

Journal Articles

Journal:
Journal of Petroleum Technology

Publisher: Society of Petroleum Engineers (SPE)

*J Pet Technol* 50 (01): 42.

Paper Number: SPE-0198-0042-JPT

Published: 01 January 1998

Abstract

This article is a synopsis of paper OTC 8319, "Can Poor 3D Sampling Lead to Successful Results?," by K.J. Davies and Gary Hampson, Texaco Ltd., originally presented at the 1997 Offshore Technology Conference, Houston, 5-8 May.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2008 SEG Annual Meeting, November 9–14, 2008

Paper Number: SEG-2008-2551

… SUMMARY A rigorous, explicit antialias spatial filter is designed and applied to remove energy above the first Nyquist wavenumber in the horizontal slowness-**frequency** domain. The antialias filter removes the spatially **aliased** **frequencies** selectively at each slowness; conventional antialias…
Abstract

SUMMARY A rigorous, explicit antialias spatial filter is designed and applied to remove energy above the first Nyquist wavenumber in the horizontal slowness-frequency domain. The antialias filter removes the spatially aliased frequencies selectively at each slowness; conventional antialias low-pass frequency filtering under- or over-corrects for spatial aliasing at all slownesses. A seismic gather can be spatially dealiased only at the expense of wavelet spectral changes; the filter does not preserve amplitude variations with offset. INTRODUCTION Spatial aliasing is a consequence of undersampling seismic data in space during acquisition. Aliasing can be associated with steep structural dips, low interval velocities or low surface-wave velocities (Claerbout, 1985; Yilmaz, 2001; Yu et al., 2007). Spatial aliasing is a problem for prestack processes like dip moveout and migration (Peacock, 1982; Bardan, 2004; Yu et al., 2007). Thus, there exists a need for a method that can separate aliased and unaliased energy. Spitz's (1991), Claerbout's (1992) and Gulunay's (2003) Fourier-transform-based dealiasing interpolation methods are robust but require equally spaced input traces, whereas local slant-stack methods (Turner, 1990; Marfurt et al., 1996; Abma and Kabir, 2005) have no restriction on input trace spacing but are sensitive to the interpolation operator. Yu et al. (2007) dealiased and interpolated seismic data in the wavelet-Radon domain, but require unaliased slownesses to be present in the data, and the signal must be consistent across wavelet scales. All the interpolation-based dealiasing methods provide qualitatively satisfactory, but indirect, treatments of aliasing. In the present work, a direct dealias spatial filter is proposed in the horizontal wave-slowness (px) and frequency (f) domain. SYNTHETIC EXAMPLE A synthetic aliased acoustic (x-t) common shot gather with 37 traces (Figure 1a) is generated for a model with two flat reflectors.
The geophone spacing is 112 m (so the first Nyquist wavenumber = 4.46 cycles/km) and the time sampling interval is 1 ms (so the Nyquist frequency = 500 Hz). In Figure 1a, the first (shallowest) reflection has more aliased energy as it is steeper (i.e., has higher p values) than the second reflection. The input (Figure 1a), the px-tau (Figure 2a), and the px-f (Figure 2d) gathers show that aliased energy is present at both small and large p values. In the plane-wave decomposition (Figure 2a), a curved reflection event is formed by constructive interference of a large number of plane waves, so the dominant energy of each reflection has a range of px values. The curved reflection trajectories need to be divided into different offset windows because not all px values are aliased at all offsets, and there is an overlap between the unaliased energy at some px values and the aliased energy at the same px values generated at the other offsets. The procedure to spatially dealias the synthetic aliased input gather is the following: first, the input data are divided into non-overlapping offset windows based on the change in the px values across the gather; the optimal window width is a Fresnel zone.
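The Nyquist limits quoted for the synthetic gather follow directly from the sampling intervals:

```python
# Reproduce the Nyquist limits quoted for the synthetic gather.
dt = 0.001    # time sampling interval (s)
dx = 0.112    # geophone spacing (km)

f_nyquist = 1.0 / (2.0 * dt)   # temporal Nyquist frequency -> 500 Hz
k_nyquist = 1.0 / (2.0 * dx)   # first Nyquist wavenumber -> ~4.46 cycles/km
```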

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2002 SEG Annual Meeting, October 6–11, 2002

Paper Number: SEG-2002-1232

… migration more than alternative schemes. The analysis covers the following topics: Spurious Differences; **Aliasing** (Temporal, Spatial); Wavelet Changes During Migration (**Frequency**, Velocity & Offset Dependent); Kirchhoff Migration as a Stacking Process (Travel-Time Sampling Errors, Sensitivity to Velocity)…
Abstract

Summary It is sometimes remarked that pre-stack Kirchhoff depth-migrated images have a lower frequency content than their time-domain counterparts. Here we assess the various factors that influence frequency content during migration, with the aim of identifying the reasons for potential loss of bandwidth in migrated data. We demonstrate that there is no inherent reason for the bandwidth of Kirchhoff (or depth) migrated data to be worse than that of other migrated data, and offer recommendations for ensuring optimal frequency content in the processed output image.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2004 SEG Annual Meeting, October 10–15, 2004

Paper Number: SEG-2004-2013

... the interpolant using 512 Fourier coefficients. It obeys both the orthogonality condition and the stability condition. Moreover, it has more zero-crossing points than necessary, which can be used for de-**aliasing**. Based on the above analysis, an adapted **frequency** range can be used to design the interpolants on any irregular...
Abstract

ABSTRACT Seismic data regularization, which aims to estimate the seismic traces on a spatially regular grid from the acquired irregularly sampled data, is an interpolation/extrapolation problem. Sampling theory offers the basic conditions for all seismic data regularization implementations. In sampling theory, the Fourier transform plays a crucial role in the analysis of the reconstruction/interpolation basis (interpolant); it estimates the frequency components in the frequency/wavenumber domain, and its inverse transform creates the seismic data on the desired regular grid. Difficulties arise from the non-orthogonality of the global Fourier basis on an irregular grid, which causes the energy from one frequency component to leak onto others. This well-known phenomenon is called “spectral leakage”. The anti-leakage Fourier transform (ALFT) overcomes these difficulties: it estimates the spatial frequency content on an irregularly sampled grid with significantly reduced frequency leakage. In this paper, we investigate the properties of the ALFT and give an insight into how it works. The interpolants are numerically calculated and analyzed in detail. The orthogonality condition of the interpolants is discussed, which demonstrates that ALFT data reconstruction meets the two most important interpolation conditions (the orthogonality condition and the unity condition). With amplitude analysis of the interpolants, the stability of the ALFT algorithm is also addressed.
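The anti-leakage idea described in the abstract can be sketched as a greedy pick-and-subtract loop: estimate all Fourier components on the irregular grid, keep only the strongest, remove its contribution from the residual, and repeat. This is our own simplification (normalized coordinates, complex-exponential basis, a fixed iteration count), not the paper's implementation:

```python
import numpy as np

def alft(x_irregular, data, freqs, n_iter=50):
    """Greedy anti-leakage Fourier estimate on an irregular grid (sketch).

    x_irregular : sample positions (assumed normalized to [0, 1))
    data        : complex samples at those positions
    freqs       : candidate frequencies (cycles per unit of x)
    """
    residual = np.asarray(data, dtype=complex).copy()
    coeffs = np.zeros(len(freqs), dtype=complex)
    n = len(x_irregular)
    for _ in range(n_iter):
        # Naive (leaky) Fourier estimates on the irregular grid.
        est = np.array([np.sum(residual * np.exp(-2j * np.pi * f * x_irregular)) / n
                        for f in freqs])
        # Keep only the strongest component and subtract its contribution,
        # so its leakage no longer contaminates the remaining estimates.
        k = int(np.argmax(np.abs(est)))
        coeffs[k] += est[k]
        residual -= est[k] * np.exp(2j * np.pi * freqs[k] * x_irregular)
    return coeffs
```

On a single pure sinusoid sampled irregularly, the loop recovers the correct component in one pass and drives the residual to zero, which is the leakage-reduction behavior the abstract describes.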

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1995 SEG Annual Meeting, October 8–13, 1995

Paper Number: SEG-1995-1373

... mask the reflection events almost completely. Groundroll is characterized by low velocity and low **frequency** and very often shows spatial **aliasing**. The groundroll persists despite the shot and geophone patterns employed during data acquisition. In these situations stacking fails to remove...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the SEG International Exposition and Annual Meeting, October 11–16, 2020

Paper Number: SEG-2020-3421471

... Temporal **aliasing** occurs when a waveform is sampled with less than two points per time period for a signal at a given **frequency**. This insufficiently sampled **frequency** will incorrectly be mapped into a lower (**aliased**) **frequency**. Analogous to this, spatial **aliasing** is said to occur when a propagating...
Abstract

Temporal aliasing occurs when a waveform is sampled with fewer than two points per time period for a signal at a given frequency. This insufficiently sampled frequency will incorrectly be mapped into a lower (aliased) frequency. Analogous to this, spatial aliasing is said to occur when a propagating waveform is measured at spatial intervals larger than half the wavelength of any given signal in that waveform. Temporally aliased frequencies cannot be recovered. On the other hand, we argue that “spatial aliasing” can be viewed as an expression of nonuniqueness in estimating the direction of propagation for a signal at a given frequency, and may be overcome when three-component seismic sensors are used. Realizing this allows higher frequencies to be used, and therefore enables the generation of higher-resolution images from the data. This is particularly useful for borehole-seismic data, which tend to contain higher frequencies than surface-seismic data. With the new technology of Distributed Acoustic Sensing, using backscattering in optical fibers, one can relatively inexpensively build large well-bore arrays of single-component sensors; however, spatial aliasing will be a problem if only single components are measured. Complementing a fiber-optic sensor array with sparsely distributed three-component sensors would resolve the directional ambiguities of the fiber-optic sensor data. Presentation Date: Tuesday, October 13, 2020 Session Start Time: 8:30 AM Presentation Time: 11:25 AM Location: 360D Presentation Type: Oral
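The "mapped into a lower frequency" rule for temporal aliasing is the standard folding of frequencies about the Nyquist frequency; a minimal sketch of that rule (a hypothetical helper, not code from the paper):

```python
def aliased_frequency(f_hz, fs_hz):
    """Apparent frequency after sampling a tone of f_hz at rate fs_hz.

    Standard frequency-folding rule: the spectrum wraps modulo fs and
    reflects about the Nyquist frequency fs / 2.
    """
    f_mod = f_hz % fs_hz
    return min(f_mod, fs_hz - f_mod)

# A 900 Hz tone sampled at 1000 Hz (Nyquist 500 Hz) appears at 100 Hz,
# while a 300 Hz tone, being below Nyquist, is unchanged.
print(aliased_frequency(900.0, 1000.0))   # 100.0
print(aliased_frequency(300.0, 1000.0))   # 300.0
```

Note the mapping is many-to-one (100 Hz, 900 Hz, and 1100 Hz all land on 100 Hz), which is exactly why temporally aliased frequencies cannot be recovered from a single channel.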

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2015 SEG Annual Meeting, October 18–23, 2015

Paper Number: SEG-2015-5829920

... the highest **frequency** content, a sparser than needed horizontal wave-propagation grid is often used to make high-**frequency** RTM affordable. As a result, the theoretically alias-free RTM operator suffers from **aliasing** issues when applied to high-**frequency** data with steep surface angles. To solve this **aliasing**...
Abstract

Summary If the migration frequency is high (e.g., 50 Hz), reverse time migration (RTM) can be computationally very expensive and hardware demanding for large 3D data sets with large apertures. For this reason, while the vertical wave-propagation grid is chosen to be dense enough to hold the highest frequency content, a sparser than needed horizontal wave-propagation grid is often used to make high-frequency RTM affordable. As a result, the theoretically alias-free RTM operator suffers from aliasing issues when applied to high-frequency data with steep surface angles. To solve this aliasing issue, we propose first decomposing the input shot gathers of the common-shot RTM into the plane-wave domain using sparse inversion and then applying surface-angle-dependent anti-aliasing filters to individual plane-wave coefficients before transforming them back to the spatial domain. Using 2D synthetic and 3D field data examples, we demonstrate that our method allows RTM to migrate data with a frequency higher than the Nyquist frequency imposed by the horizontal wave-propagation grid without suffering much from aliasing. Introduction Aliasing in seismic processing can be broadly classified into three types: data aliasing, migration operator aliasing, and imaging aliasing. We focus on migration operator aliasing and data aliasing. Aliasing issues in a Kirchhoff migration operator can be solved either by interpolating input data to a denser grid or by applying anti-aliasing filters during the migration (Gray, 1992; Lumley et al., 1994; Abma et al., 2005; Zhang et al., 2001). RTM is performed in the frequency domain either explicitly (Larson, 1999) or implicitly (Zhang et al., 2007) and thus is alias-free when the wave-propagation grid in all three spatial directions is dense enough to hold the highest frequency content in the input data (Gray, 2013). 
If the migration frequency is high (e.g., 50 Hz), RTM is computationally very expensive and hardware demanding for large 3D data sets with large apertures. To make high-frequency RTM affordable or possible at all, one commonly adopted strategy is to use an uneven spatial grid in RTM wave propagation: the vertical grid is dense enough (e.g., <10 m) to hold the highest frequency content in the input data, whereas the horizontal grid is chosen to be coarser (e.g., 50 m × 50 m). By doing this, the computational cost and memory usage can be significantly reduced, and the majority of the high-frequency events with small surface angles can still be correctly propagated and migrated despite the sparse horizontal grid. However, high-frequency reflection data with steep surface angles will suffer from aliasing issues that degrade the RTM images.
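Why small-angle events survive the coarse horizontal grid while steep-angle events alias can be seen from the horizontal wavelength: for a wave of velocity v arriving at surface angle theta (from vertical), the horizontal wavelength is v / (f sin theta), so the grid is alias-free only up to f = v / (2 dx sin theta). The following sketch uses this textbook relation with illustrative numbers of our own choosing (the paper quotes only the 50 Hz migration frequency and ~50 m horizontal grid):

```python
import math

def f_max_hz(v_m_s, dx_m, theta_deg):
    """Maximum alias-free frequency on a horizontal grid of spacing dx_m
    for a wave of velocity v_m_s at surface angle theta_deg from vertical.

    From dx <= lambda_x / 2 with lambda_x = v / (f * sin(theta)).
    """
    return v_m_s / (2.0 * dx_m * math.sin(math.radians(theta_deg)))

# Illustrative: with v = 2000 m/s and a 50 m horizontal grid, a
# 30-degree event is alias-free only up to 40 Hz -- below a 50 Hz
# migration frequency -- while a 10-degree event is safe to ~115 Hz.
print(round(f_max_hz(2000.0, 50.0, 30.0), 1))  # 40.0
```

As theta shrinks, sin(theta) goes to zero and the limit grows without bound, matching the observation that small-surface-angle events propagate correctly on the coarse horizontal grid.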

Proceedings Papers

Tomohide Ishiyama, Derrick John Painter, Kamal Belaid, Stephan Gazet, Ahmed S. Al Suwaidi, Joe Karwatowski, Mohamed Mahgoub, Keiichi Furuya, Alaa El-Dakhakhny, Rick Sinno

Publisher: Society of Petroleum Engineers (SPE)

Paper presented at the Abu Dhabi International Petroleum Exhibition and Conference, November 3–6, 2008

Paper Number: SPE-117915-MS

... applied to the separate hydrophone and geophone data sets before PZ summation were effective in removing a significant amount of noise. However, remnant noise was still present after PZ summation. This noise included high-**frequency** residual **aliased** water-borne noise, receiver coupling and remnant coherent...
Abstract

Abstract A large 3D OBC seismic survey was conducted offshore Abu Dhabi, United Arab Emirates. This survey covered an undeveloped Lower Cretaceous Thamama field and a producing Middle Cretaceous Mishrif field and was highly specified to achieve the different development objectives of the individual fields. Data processing was particularly challenging. In addition to the mud-roll and trapped-mode noise, inter-bed multiples and acquisition footprint associated with OBC data acquired offshore Abu Dhabi, there was a significant difference in raw field data quality between the two fields despite the uniform processing applied in an attempt to obtain a seamless image across the area. A single set of parameters was applied after extensive testing in both fields, using well synthetics and VSPs to guide parameter selection at key steps in the processing flow. A conservative methodology was adopted for noise attenuation. The objective was to peel noise from the data without touching signal. Multiple targeted noise-attenuation processes, each focused on a specific noise type, were applied, as opposed to a few harsh filters addressing multiple noise types in the same step, which could produce a uniform but low-resolution data set across both areas. Much of the coherent noise attenuation was performed on the separate P and Z components prior to PZ summation. The processed final data revealed clear features: continuous reflections and discontinuous fault trends; channel-like features in the overburden; and the reefal reservoir edge; none of which were mapped in previous 2D seismic surveys. However, some objectives were not fully resolved and will have to be addressed in any future re-processing: the difference in data quality between the two fields, such as S/N ratio, signal frequency bandwidth and wavelet; remnant acquisition footprint; and limited resolution and offset range. 
The marked contrast in data quality between the two fields was interpreted as being related to variations in near-surface geology. Separate targeted processing in each individual field could address these problems. The lessons learned during the processing will help any future processing in carbonate fields offshore Abu Dhabi, United Arab Emirates. Introduction A large 3D OBC seismic survey was conducted offshore Abu Dhabi, United Arab Emirates. This survey covered an undeveloped Lower Cretaceous Thamama field (Field A) and a producing Middle Cretaceous Mishrif field (Field B). The outlines of the two fields overlap. The two-way time to the top Thamama reservoir is approximately 1.6 sec, and to the top Mishrif 1.4 sec. The survey objectives for the two fields were different. Field A has a subtle low-relief structure at Thamama level. In addition, channel-like features filled with high-velocity material are present in the overburden at the Upper Cretaceous Fiqa level. These features cause a two-way time pull-up below the high-velocity channel anomalies and have a significant effect on the time-structure interpretation. The loose grid of 2D seismic data that was previously available lacked the dense areal coverage to map the field-wide distribution of these features. Field B is located east of and partially overlaps Field A. Field B has a structure at the shallower Mishrif level. The reservoir is a reefal facies which changes stratigraphically toward the western flank. No Mishrif reservoir has been confirmed to date in Field A; however, the western edge of the reefal facies has not yet been defined. In addition, the structure is highly undulated due to erosional and karstic topography formed during sea-level fall after reefal facies deposition.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1995 SEG Annual Meeting, October 8–13, 1995

Paper Number: SEG-1995-0041

..., synchronizing the free-running signatures with the seismic source, filtering out unwanted **frequencies** for improving S/N ratio, and preventing **frequency** **aliasing**. The beauty and efficiency of the method are reflected by the fact that these multiple functions are achieved... Downhole seismic data acquisition...
Proceedings Papers

Paper presented at the The 29th International Ocean and Polar Engineering Conference, June 16–21, 2019

Paper Number: ISOPE-I-19-673

... the auxiliary noise analysis method with the **frequency** analysis method to eliminate the modal **aliasing**. The modified EMD method is applied to the shock response signal analysis. By studying each eigenfunction, the components and characteristics of the shock response acceleration signal are analyzed, which provides...
Abstract

ABSTRACT This paper introduces a new method for the analysis of non-linear and non-stationary signals. The method is based on empirical mode decomposition technology, decomposing signals into multiple eigenfunctions, but it suffers from a modal aliasing problem. We combine the auxiliary noise analysis method with the frequency analysis method to eliminate the modal aliasing. The modified EMD method is applied to shock response signal analysis. By studying each eigenfunction, the components and characteristics of the shock response acceleration signal are analyzed, which provides a reference for the analysis of the shock signal of an underwater explosion. INTRODUCTION Under an underwater explosion load, the ship structure produces a response signal with a large frequency bandwidth. The signal spectrum is generally complex. A better time-frequency analysis method is necessary to analyze the signal characteristics and obtain the real information of the signal. A new method based on empirical mode decomposition (EMD) technology is proposed in this paper, and a nonlinear signal analysis program is compiled based on it. The vibration signal is decomposed into several eigenfunctions. The Hilbert-Huang transform method based on EMD is an advanced method, but the EMD method can cause modal aliasing in time-frequency analysis. In this paper, the ensemble empirical mode decomposition (EEMD) method is used to solve this problem. Combined with the auxiliary noise analysis method, white noise with a uniform spectral distribution is added to the analyzed signal, so that the white noise and the signal are uniformly distributed. The signal components then include the signal itself and the white noise. Because the mean value of the white noise is zero, the noise contributions cancel each other after several iterative operations. Only the original signal itself is stable, and the mode aliasing is eliminated. 
In this paper, the response of a ship structure subjected to underwater explosion load is simulated. A signal analysis program based on the EEMD method was compiled and used to analyze the structural response signal, and the time-frequency information of the signal was obtained. By studying each eigenfunction, the components and characteristics of the shock acceleration signal of the structural response were analyzed. The proposed program provides a reference for the analysis and study of the shock signal of ships subjected to underwater explosion.
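The noise-cancellation argument at the heart of EEMD can be demonstrated numerically: independent zero-mean white-noise realizations added to a signal average out across the ensemble. The sketch below shows only that cancellation step on the noisy copies themselves (a full EMD sifting stage is omitted; signal, noise level, and ensemble size are our own illustrative choices):

```python
import numpy as np

# Core EEMD idea: add independent white-noise realizations to the signal,
# process each copy, and average -- the zero-mean noise cancels while the
# signal survives. Here the noisy copies are averaged directly.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 5 * t)

ensemble = [signal + 0.3 * rng.standard_normal(t.size) for _ in range(400)]
averaged = np.mean(ensemble, axis=0)

# Averaging 400 realizations shrinks the noise by ~1/sqrt(400) = 1/20,
# so the residual error is small compared to the added noise level.
err = float(np.max(np.abs(averaged - signal)))
print(err < 0.1)  # True
```

This is why the abstract can claim the noise signals "counteract with each other after several iterative operations": the residual noise amplitude falls as one over the square root of the ensemble size.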
