
Comments (9)

mdenbina avatar mdenbina commented on July 16, 2024

Hi, that's a curious error message. It seems like the code is having a hard time reading the SLCs with the right byteoffset. I think there are a couple possibilities, but it's hard for me to debug without seeing the folder containing the UAVSAR data files you downloaded.

One possibility is that the annotation file ("pongara_..._01_BU.ann") does not match the SLC files that you downloaded. There are two sets of files on the site, baseline uncorrected ("BU") and baseline corrected ("BC"), and you need a consistent set of .ann and .slc files, otherwise the code can get confused: if you have the "BC" SLCs downloaded, for example, make sure to use the "BC" annotation file. Another possibility is that the values in the annotation file for the number of rows and/or columns in the SLCs were edited or corrupted in some way.
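As an illustration, here is a quick way to check a download for a BU/BC mix. This helper is my own sketch, not part of kapok; it just looks for the "_BC" and "_BU" tags in the filenames:

```python
def mixed_baseline_sets(filenames):
    """Return True if a file listing mixes baseline-corrected ("_BC")
    and baseline-uncorrected ("_BU") UAVSAR files."""
    has_bc = any('_BC' in name for name in filenames)
    has_bu = any('_BU' in name for name in filenames)
    return has_bc and has_bu

files = ['pongara_..._01_BU.ann', 'pongara_..._01_BC_s1_1x1.slc']
print(mixed_baseline_sets(files))  # True: the .ann and .slc baselines disagree
```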

Another possibility is that you downloaded the segment 1 SLC files but not the segment 2 SLC files, and the code is having trouble loading the missing segment. If so, you have to click "Segment 2" and download that set of files as well:
(screenshot: the "Segment 2" download link on the data page)

A third possibility is that you set the azbounds or rngbounds arguments in kapok.uavsar.load to values that are incompatible with the data, but if you copied the command from the manual that shouldn't be the issue.

If you could show me the list of files you have in your data folder, that would help me debug. You could also add a line to kapok.uavsar.getslcblock just before this line:

slc = np.memmap(file, dtype='complex64', mode='c', offset=byteoffset, shape=(azend-azstart,rngsize))

The added line would simply print the byteoffset, so you can see what value it has.
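For reference, here is a rough sanity check for that printed value, assuming the SLCs are plain row-major complex64 rasters with rngsize samples per line (this matches the memmap call's dtype and shape, though getslcblock's actual offset arithmetic may include other terms):

```python
BYTES_PER_SAMPLE = 8  # numpy complex64: two 4-byte floats

def expected_byteoffset(azstart, rngsize):
    """Byte offset of azimuth line azstart in a flat complex64 raster."""
    return azstart * rngsize * BYTES_PER_SAMPLE

# If the printed byteoffset (plus the block being read) exceeds the .slc
# file size on disk, the memmap call will fail.
print(expected_byteoffset(1000, 4950))  # 39600000
```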

One other complication is that the stack trace from Cython code is usually less clear than from native Python. So you could use the native Python version of kapok.uavsar.load, like so:

import kapok.uavsarp
scene = kapok.uavsarp.load(...)

This is the same function as kapok.uavsar.load; it just doesn't use Cython. It will be slower, but it can be useful when debugging issues like this.

Hope this helps, let me know how it goes.


yamanidev avatar yamanidev commented on July 16, 2024

Thank you for replying!

The way I set it up was to place all of the files in one folder, but then it reported exactly 41 files with conflicting names. I noticed that out of these 41 files, only one differed in size from its same-named counterpart. Since the other 40 same-named files all have identical sizes, I assumed they are simply duplicates of the same content. Is that assumption correct?

How can I use the segment 1 and segment 2 files without putting them all in one directory? Is there a way to separate them, say with a folder structure that contains main.py, segment1/ and segment2/?

Again, thank you infinitely!


mdenbina avatar mdenbina commented on July 16, 2024

No problem. So I think I can guess what might be the issue. The files have the same size because they are binary images with the same dimensions. However, each of them has unique contents and they are not interchangeable. The .slc files represent the single-look complex radar imagery for each channel and flight track. All of them should have the same dimensions and therefore the same file size. If one of them has a smaller file size, maybe there was a problem with the download for that file? Could you try re-downloading that file and see if you get a file with the same file size as the others? That might explain the error if one of the files is smaller than expected. In that case, the calculated byteoffset to start reading from could be larger than the actual file size.
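A sketch of that file-size check, assuming (as above) flat complex64 rasters, with the row and column counts taken from the annotation file. The os.path.getsize comparison is my own suggestion, not something kapok does for you:

```python
import os

BYTES_PER_SAMPLE = 8  # complex64

def expected_slc_bytes(nrows, ncols):
    """Expected .slc file size for an nrows x ncols complex64 image."""
    return nrows * ncols * BYTES_PER_SAMPLE

def undersized(path, nrows, ncols):
    """True if the file on disk is smaller than the expected raster size,
    i.e. a partial or failed download."""
    return os.path.getsize(path) < expected_slc_bytes(nrows, ncols)
```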

The segment 1 and segment 2 files will have slightly different filenames (*_s1 compared to *_s2), for example:

pongara_TM275_16009_002_160227_L090HH_01_BC_s1_1x1.slc
pongara_TM275_16009_002_160227_L090HH_01_BC_s2_1x1.slc

So you should be able to put them all in the same path. The software expects all the data files to be in the same path; unfortunately, it won't work if they are in separate subfolders for the different segments.
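To confirm both segments made it into that single folder, one could count .slc files per segment tag. This helper is hypothetical (kapok does its own file discovery from the .ann file); it just parses the _s1/_s2 token out of each name:

```python
def segment_counts(filenames):
    """Count .slc files per segment tag ('s1', 's2', ...) in a listing."""
    counts = {}
    for name in filenames:
        if not name.endswith('.slc'):
            continue  # .ann and .baseline files are shared by both segments
        for token in name.split('_'):
            if len(token) == 2 and token[0] == 's' and token[1].isdigit():
                counts[token] = counts.get(token, 0) + 1
    return counts

files = ['pongara_TM275_16009_002_160227_L090HH_01_BC_s1_1x1.slc',
         'pongara_TM275_16009_002_160227_L090HH_01_BC_s2_1x1.slc']
print(segment_counts(files))  # {'s1': 1, 's2': 1}
```

If one segment's count is zero or lower than the other's, some SLC downloads are missing.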

Best regards,

Michael


yamanidev avatar yamanidev commented on July 16, 2024

This was the only file out of the 41 that had a different size than the other file with the same name:

(screenshot: file listing showing the one file whose size differs)

I am currently redownloading the data. You're saying that they should all be able to be placed in one directory without naming conflicts?

Edit: it seems that the annotation files are identical?


mdenbina avatar mdenbina commented on July 16, 2024

Ah, yes, the annotation files and the .baseline files are the same for both segments. In general, where the file names are identical, then they are the same (they contain information for both segments). You can indeed place all the files into the same directory without naming conflicts. I thought when you said one of the files was smaller it was a .slc file, not a .baseline file. In that case, I am not sure why you get that error when running getslcblock() if all the SLC files are present and are the correct size.


yamanidev avatar yamanidev commented on July 16, 2024

We've re-downloaded everything from scratch and started over, but we still get the same error. We've triple-checked the files, and they're all present within one directory.

It always raises an error when reaching 25%. Do you have any idea what is causing this? Is this error always related to reading files?

Should we keep these settings the same? We haven't changed them.
(screenshot: the current load() settings)


mdenbina avatar mdenbina commented on July 16, 2024

I've run the code on this dataset many times and never seen this error so I'm still not sure what is causing it. But I have only run it on Mac OS X and CentOS Linux, never on Windows. Did you try running using kapok.uavsarp.load() instead of kapok.uavsar.load() like I suggested? I would be interested in seeing the traceback from kapok.uavsarp.load().

You can indeed keep all those settings the same. Those should be fine.

You could try setting azbounds=(0,1000) for example in the call to load() and see if it works. It will only load a portion of the data so you will not get the entire dataset. But since the function fails around 25%, you could use this to test if the rest of the function will complete if only loading the first small fraction of the data.

Another thing to try might be setting num_blocks=50 (or other large number) in the call to load(). This will split the SLCs into smaller chunks when trying to form the covariance matrix. That might make loading the data easier.
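To illustrate what raising num_blocks does, here is a minimal sketch of splitting the azimuth lines into chunks (the splitting logic below is my own illustration; kapok's internal chunking may differ). A larger num_blocks just means each chunk is smaller and needs less memory at once:

```python
def block_bounds(nrows, num_blocks):
    """Split nrows azimuth lines into num_blocks (start, end) ranges."""
    step = -(-nrows // num_blocks)  # ceiling division
    return [(start, min(start + step, nrows))
            for start in range(0, nrows, step)]

print(block_bounds(10, 3))  # [(0, 4), (4, 8), (8, 10)]
```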


huxainsen avatar huxainsen commented on July 16, 2024

Changing num_blocks to 50 still gives the same error.


Keli233 avatar Keli233 commented on July 16, 2024

My issue has been resolved. I am running this on a Linux platform, and the error was due to the size of an SLC file. One of the SLC files was too small, so after re-downloading it and running the process again, the error did not occur.

