Present schemes for semicodeless P(Y) processing assume an autonomous receiver that estimates the W-code bits directly from the received signal. Given the limited C/N0 available, those bit estimates are inevitably noisy, and the result is squaring loss.
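To make "squaring loss" concrete, here's a sketch using one commonly quoted rule of thumb, L = 1 + 1/(2ρ) with ρ the linear predetection SNR over one W bit. The exact expression varies with the tracking architecture, and the assumed W-bit rate of 480 kHz is only approximate, so treat the numbers as illustrative.

```python
import numpy as np

def squaring_loss_db(cn0_dbhz, t_w=1 / 480e3):
    """Rough semicodeless squaring loss (dB) for predetection integration
    over one W bit of duration t_w seconds, using L = 1 + 1/(2*rho)."""
    rho = 10 ** (cn0_dbhz / 10) * t_w
    return 10 * np.log10(1 + 1 / (2 * rho))

# An omni antenna at a typical C/N0 versus 20 dB more from a 1 m dish
loss_omni = squaring_loss_db(45)       # several dB of loss
loss_dish = squaring_loss_db(45 + 20)  # well under half a dB
```

The second number is what's behind the "practically zero" claim below: 20 dB of extra gain pushes the predetection SNR well above unity, at which point the loss term collapses.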
One simple way around this is to use more reliable estimates of the W bits acquired with a medium-gain antenna. These estimates can be published continuously in near-real-time by a suitable Internet-based service, then used by any client receiver, which could be a conventional real-time receiver or a recorded-waveform software receiver.
This need not be all that expensive. All that's required is roughly one medium-gain antenna per satellite per hemisphere. A one-meter dish with about 20 dB of gain would reduce the squaring loss to practically zero. Of course, a reasonable Internet uplink of about 480 kbit/s is needed for each satellite. Storage costs could be controlled by retaining the W bits for a limited time (a few weeks or months).
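A quick back-of-envelope check on those bandwidth and storage figures (the constellation size and 30-day retention window are my assumptions, not from the text):

```python
BITS_PER_SEC = 480e3   # W-bit rate per satellite, as above
SATS = 32              # assumed nominal GPS constellation size
HEMIS = 2              # one antenna per satellite per hemisphere
RETAIN_DAYS = 30       # assumed retention window

bytes_per_day_per_sat = BITS_PER_SEC * 86400 / 8
total_tb = bytes_per_day_per_sat * SATS * HEMIS * RETAIN_DAYS / 1e12

print(f"{bytes_per_day_per_sat / 1e9:.1f} GB/day per satellite")  # ~5.2 GB
print(f"{total_tb:.0f} TB for the whole retention window")        # ~10 TB
```

So even a month of W bits for the full constellation is on the order of ten terabytes, which is modest by cloud-storage standards.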
Another possibility, in a world where every receiver is uploading full RF waveforms to a central service, is to sum together the signals from many receivers before estimating the W bits. (If the summing is done prior to detection, this is effectively a phased-array antenna.) Quite a few signals would be needed, though, if each receiver has the usual omni antenna. Better to rely on dedicated medium-gain antennas.
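A toy simulation shows why summing before detection helps: detecting each receiver's W bits separately at low SNR gives a high error rate, while summing the (phase-aligned) signals first averages the noise down. The SNR and receiver count here are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_rx, snr = 10_000, 16, 0.05      # very noisy per-receiver W-bit SNR
w = rng.choice([-1.0, 1.0], n_bits)       # true W bits
# each receiver sees the same bits buried in independent noise
obs = w + rng.normal(scale=1 / np.sqrt(snr), size=(n_rx, n_bits))

err_single = np.mean(np.sign(obs[0]) != w)           # detect per receiver
err_summed = np.mean(np.sign(obs.sum(axis=0)) != w)  # sum first, then detect
```

With these numbers a single receiver gets roughly 40% of the bits wrong (barely better than guessing), while the 16-receiver sum does markedly better; it's the same effect a single medium-gain antenna achieves more cheaply.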
The W-code bits would not be of any use to spoofers since they would be tens or hundreds of milliseconds out of date.
The same trick can be played with other unknown codes, such as the GPS M code or the PRS codes on other GNSS services. The bit rates of the published reference waveforms would be much higher than those of W-code, but perhaps the effort would be repaid by observables that could be obtained in no other way, helping with multipath and ambiguity resolution. In the limit of a totally unknown waveform, this is just VLBI.
Most GNSS receivers process signals serially. This is natural for tracking loops based on PLLs and DLLs, as they have a feedback structure. If signals are recorded and stored, however, another viewpoint might be more flexible.
Let’s regard the recorded waveform as a series of chunks of length, say, 5 minutes. All these chunks can be processed in parallel, though at the cost of an ambiguity in whole cycles of carrier phase for each chunk. (Let’s assume that acquisition or aiding has already allowed each chunk processor to start with good estimates of code phase and doppler, and that suitable guard intervals allow the tracking loops to converge somewhat in advance of the start of each chunk, so that effectively the chunks overlap a little.) Once all chunks are processed, the whole-cycle offsets between adjacent chunks are resolved and cumulatively summed. This reduces the ambiguity set to the normal case of just a single ambiguity for the whole interval of the satellite pass (assuming no cycle slips or loss of lock).
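The stitching step can be sketched in a few lines. Assuming each chunk's phase track is correct up to an unknown integer number of cycles, and that adjacent chunks overlap by (here) one sample thanks to the guard intervals, the integer jump at each boundary falls out of a simple rounding:

```python
import numpy as np

def stitch_chunks(chunks):
    """Join per-chunk carrier-phase tracks (in cycles), each carrying an
    unknown integer-cycle offset. Assumes adjacent chunks overlap by one
    sample, so the offset at each boundary can be found by rounding the
    discrepancy at the shared sample."""
    out = list(chunks[0])
    for cur in chunks[1:]:
        jump = round(out[-1] - cur[0])   # integer cycles between chunks
        out.extend(c + jump for c in cur[1:])
    return np.array(out)

# a smooth phase history, split into chunks with arbitrary integer offsets
true = np.arange(100) * 0.317
chunks = [true[0:34], true[33:67] + 5, true[66:100] - 7]
stitched = stitch_chunks(chunks)       # recovers the original track
```

The rounding is safe as long as the fractional phase is tracked to well under half a cycle at the chunk boundaries, which is exactly what the converged tracking loops provide.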
So an attractive GNSS processing scenario might be:
- deposit all waveforms in a central place, such as one of the cloud computation environments like Amazon’s S3 and EC2
- do all processing of interest in parallel, by allocating as many processors as needed; place intermediate results as annotations on a common scoreboard
- coalesce the results, obtain observables, and post-process
By having the entire waveform accessible at once to a common pool of processors, a kind of annotation-based processing becomes possible. First, acquisition might be performed at fixed intervals, possibly aided by a location estimate and orbit estimates from IGS. Once the file has been annotated with acquisition results, each chunk can be tracked as outlined above. Vector tracking, differencing at the correlator level, quality monitoring, etc. can all be included as additional workflow options.
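A minimal skeleton of that workflow might look like the following. The function names and annotation format are entirely hypothetical, and a real CPU-bound version would use `ProcessPoolExecutor` (or separate cloud machines) rather than threads; this just shows the fan-out/scoreboard shape.

```python
from concurrent.futures import ThreadPoolExecutor

def track_chunk(annotation):
    """Stand-in for a real per-chunk tracking loop (hypothetical)."""
    chunk_id, code_phase, doppler = annotation
    # ... correlate the chunk's samples and close the tracking loops here ...
    return chunk_id, {"code_phase": code_phase, "doppler": doppler}

def process_pass(annotations):
    """Fan the chunks out to a worker pool and collect results on a
    common 'scoreboard' keyed by chunk id."""
    scoreboard = {}
    with ThreadPoolExecutor() as pool:
        for chunk_id, result in pool.map(track_chunk, annotations):
            scoreboard[chunk_id] = result
    return scoreboard

# acquisition annotations: (chunk id, code phase estimate, doppler estimate)
board = process_pass([(0, 123.4, -2500.0), (1, 125.9, -2480.0)])
```

Because the scoreboard is just annotations keyed by chunk, further passes (vector tracking, quality monitoring, re-tracking a bad chunk) can be layered on without touching the raw waveform pipeline.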
As a first step towards obtaining GPS P-code observables, it seems prudent to verify that the P code is detectable in a test recording with high C/N0.
Here’s the result with an L1 recording containing a strong signal from PRN 30 (about 50 dBHz). The peak is smeared over an extent somewhat larger than two chips, the result of some residual code doppler. Also the peak is offset by about 1 chip, again the result of residual error from the C/A acquisition (about 0.1 C/A chip). It was great to see it at all though. Certainly it proves out the P code generator.
The recording is about 1.4 seconds long and contains these data bits:
It turns out I was a bit lucky and got a complete TOW word at the end:
10001011 # Preamble
00001100000110 # TLM Message
0 # Integrity Status Flag
0 # Reserved
101011 # Parity
10010100100100101 # TOW Count (inverted: 01101011011011010)
This TOW count corresponds to Wednesday 19:40:12 in the GPS week, so the first bit of the preamble is at 19:40:06. Adding up the C/A code phase and the previous bits gives the time of the first RF sample of the file as 19:40:05.450822469 or 3375955761914 P code chips. That was the offset given to the script that produced the above plot.
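The arithmetic is easy to check. The TOW count (units of 6 s, pointing at the start of the *next* subframe) comes from inverting the bits above; the P code chips at 10.23 Mchips/s from the start of the GPS week (Sunday 00:00, so Wednesday is day 3):

```python
tow_count = int("01101011011011010", 2)   # inverted TOW bits from the HOW
next_subframe_s = tow_count * 6           # seconds into the GPS week
assert next_subframe_s == 330012          # Wednesday 19:40:12

# first RF sample, from C/A code phase plus the bits before the preamble
first_sample_s = 3 * 86400 + 19 * 3600 + 40 * 60 + 5.450822469
p_chips = first_sample_s * 10.23e6        # P code runs at 10.23 Mchips/s
print(round(p_chips))                     # 3375955761914
```

The preamble lands 6 s (one subframe) before the TOW epoch, at 19:40:06, matching the figures above.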
I’ve put the code generators and some preliminary acquisition and tracking software in a new GitHub repository:
I’m sure it’s not as efficient as low-level code in C, but it’s nice to have concise scripts for prototyping, with everything in the MATLAB-like Python/NumPy environment.