
k-planes's Introduction

K-Planes: Explicit Radiance Fields in Space, Time, and Appearance

Where we develop an extensible (to arbitrary-dimensional scenes) and explicit radiance field model which can be used for static, dynamic, and variable appearance datasets.

Code release for:

K-Planes: Explicit Radiance Fields in Space, Time, and Appearance

Sara Fridovich-Keil*, Giacomo Meanti*, Frederik Rahbæk Warburg, Benjamin Recht, Angjoo Kanazawa

🚀 Project page

📰 Paper

πŸ“ Raw output videos and pretrained models

Integration with the NerfAcc library for even faster training

Integration with NerfStudio for easier visualization and development

Setup

We recommend setup with a conda environment using PyTorch for GPU (a high-memory GPU is not required). Training and evaluation data can be downloaded from the respective websites (NeRF, LLFF, DyNeRF, D-NeRF, Phototourism).

Training

Our config files are provided in the configs directory, organized by dataset and by explicit vs. hybrid model version. Update the relevant config file with the location of the downloaded data and your desired scene name and experiment name. To train a model, run

PYTHONPATH='.' python plenoxels/main.py --config-path path/to/config.py

Note that for DyNeRF scenes it is recommended to first run for a single iteration at 4x downsampling to pre-compute and store the ray importance weights, and then run as usual at 2x downsampling. This is not required for other datasets.
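As a minimal sketch, the two-pass DyNeRF schedule only touches two config values; assuming config is the dict defined in the provided DyNeRF config file (the same keys appear in the configs quoted in the issues below), the two passes amount to:

# Pass 1 (run once): pre-compute and cache the ray importance weights cheaply.
config.update({'data_downsample': 4, 'num_steps': 1})

# Pass 2 (full training): back to 2x downsampling and the full schedule.
config.update({'data_downsample': 2, 'num_steps': 90001})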

Visualization/Evaluation

The main.py script also supports rendering a novel camera trajectory, evaluating quality metrics, and rendering a space-time decomposition video from a saved model. These options are accessed via flags --render-only, --validate-only, and --spacetime-only, and a saved model can be specified via --log-dir.

License and Citation

@inproceedings{kplanes_2023,
      title={K-Planes: Explicit Radiance Fields in Space, Time, and Appearance},
      author={{Sara Fridovich-Keil and Giacomo Meanti} and Frederik Rahbæk Warburg and Benjamin Recht and Angjoo Kanazawa},
      year={2023},
      booktitle={CVPR}
}

Note: Joint first-authorship is not fully supported in BibTeX; you may need to modify the above depending on your format.

This work is made available under the BSD 3-clause license. Click here to view a copy of the license.

k-planes's People

Contributors

frederikwarburg, sarafridov


k-planes's Issues

Query Regarding 'bds.npy' File in K-Planes Dataset

I've been exploring the datasets utilized within K-Planes and noticed the presence of a file named 'bds.npy'. Unfortunately, I couldn't find any documentation or explanation regarding the purpose or creation process of this specific file. (I assumed bds contains the min and max depth of every image.)

Could someone please provide insights into the nature of the 'bds.npy' file? I'd appreciate guidance on how to generate or create this 'bds.npy' file for my own dataset. Is this reliant on COLMAP, or are there alternative methods or tools that can be used to generate this file?

Any clarification, documentation, or step-by-step instructions on creating the 'bds.npy' file would be immensely helpful. Thank you in advance for any assistance or information provided.

How to obtain feature maps?

Hi @sarafridov, thank you for your great work!

In the paper, you mentioned that the features of a 4D coordinate are obtained from the k-planes, but I'm curious: how do you obtain the feature maps of the k-planes themselves?

Thank you!

Question about depth calculation.

Thanks for your awesome work. Recently I've been working with depth in NeRF and I found that your calculation of depth is different from the original NeRF. I wonder why you add a one_minus_transmittance term onto the depth.
Thanks a lot.

How to set the value of "scene_bbox" on a custom dataset?

Thank you for your excellent work!
I have encountered a problem when applying the code to custom datasets. How can I set the value of "scene_bbox" on a custom dataset? And how are the values of "scene_bbox" in the provided configuration file determined?
Looking forward to your answer.
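As a rough illustration only (this is a generic heuristic, not the authors' recipe, and camera_positions is a hypothetical input you would extract from your own poses), one common way to get a starting scene_bbox is to bound the camera centers with some padding:

import numpy as np

def estimate_scene_bbox(camera_positions: np.ndarray, padding: float = 1.5):
    # camera_positions: (N, 3) array of camera origins in world coordinates.
    # Returns [[xmin, ymin, zmin], [xmax, ymax, zmax]] enlarged by `padding`.
    lo = camera_positions.min(axis=0)
    hi = camera_positions.max(axis=0)
    center = (lo + hi) / 2
    half = (hi - lo) / 2 * padding
    return [(center - half).tolist(), (center + half).tolist()]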

IndexError at "video_datasets.py"

Dear authors,
I really appreciate your excellent work and neat code.
I'm currently running k-planes with my multiview dataset after converting to a format that is compatible with DyNeRF.
I've tried two personal datasets with the default "dynerf_hybrid.py" configurations.
In the case of Dataset 1 (consisting of nine views with a resolution of 1920x1088 per view), it works fine through 90001 iterations.
However, in the case of Dataset 2 (consisting of sixteen views with a resolution of 1920x1088 per view), it throws the following error
at the iteration equal to "ist_step".

File "D:\ETRI\2023\Immersive\Standardization\INVR\K-Planes-main\plenoxels\datasets\video_datasets.py", line 268, in getitem
"timestamps": self.timestamps[index], # (num_rays or 1, )
IndexError: index 858331320 is out of bounds for dimension 0 with size 501350400

Am I running something wrong?
Thank you very much.

Question about test K-planes with zju-dataset

Hi, Dr. Fridovich-Keil. Thanks for your great work!
Right now I am facing an issue when I train and render the ZJU dataset (a human motion dataset) with K-Planes.
The render result is shown below.
zjuresult

After doing some research, I realized that this issue may arise from an incorrect transformation matrix coordinate convention, since the ZJU dataset uses the OpenCV coordinate system.
So I am curious: which camera extrinsic coordinate convention does K-Planes use, PyTorch3D or OpenGL?

Thanks!

monocular real scene video

I tried to train from a single monocular real-scene video, using both the D-NeRF and DyNeRF configs, but the results are unsatisfying.
Does K-Planes have the capacity to train from just one monocular real-scene video, like NSFF?

spacetime_.mp4
spacetime_.mp4

little question

Thanks for your great work!!
Did you use methods like Instant-NGP or something else to speed up training and rendering?

Bug in depth rendering?

I think that there is a bug in the current depth rendering:

one_minus_transmittance = torch.sum(weights, dim=-2)
depth = torch.sum(weights * steps, dim=-2) + one_minus_transmittance * rays_d[..., -1:]

I believe it should actually be:

one_minus_transmittance = torch.sum(weights, dim=-2).clamp(min=1e-6)
depth = torch.sum(weights * steps, dim=-2)/one_minus_transmittance
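For anyone comparing the two, below is a self-contained sketch of both formulas, assuming weights and steps have shape (num_rays, num_samples, 1); it only restates the snippets above and is not the repository's exact code:

import torch

def depth_as_implemented(weights, steps, rays_d):
    # As quoted above: expected depth plus a term that pushes unsaturated rays
    # toward a background depth derived from the ray direction.
    accumulation = torch.sum(weights, dim=-2)  # named one_minus_transmittance above
    return torch.sum(weights * steps, dim=-2) + accumulation * rays_d[..., -1:]

def depth_as_proposed(weights, steps):
    # The fix proposed in this issue: normalize the expected depth by the
    # accumulated opacity (clamped to avoid division by zero).
    accumulation = torch.sum(weights, dim=-2).clamp(min=1e-6)
    return torch.sum(weights * steps, dim=-2) / accumulation

# Toy usage with made-up values:
weights = torch.rand(4, 48, 1) * 0.02
steps = torch.linspace(2.0, 6.0, 48).view(1, 48, 1).expand(4, 48, 1)
rays_d = torch.randn(4, 3)
print(depth_as_implemented(weights, steps, rays_d).shape)  # torch.Size([4, 1])
print(depth_as_proposed(weights, steps).shape)             # torch.Size([4, 1])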

Training on Custom Data

Thanks for your great work.
I tried to run K-Planes on my own dataset, namely video sequences from the NeuMan dataset.
However, I got a NaN loss after about 10000 steps.
Here is my training config file, which is modified from the dynerf_hybrid config; I only changed the scene_bbox and training steps.
Could you give me some advice about which loss or operation in K-Planes may cause the invalid NaN output? (A generic NaN-checking sketch follows the config below.)

config = {
 'expname': 'neuman_bike',
 'logdir': './logs_neuman/bike',
 'device': 'cuda:0',

 # Run first for 1 step with data_downsample=4 to generate weights for ray importance sampling
 'data_downsample': 1,
 'data_dirs': ['/opt/data/llff_data/neuman_bike/'],
 'contract': False,
 'ndc': True,
 'ndc_far': 2.6,
 'near_scaling': 0.95,
 'isg': False,
 'isg_step': -1,
 'ist_step': 50000,
 'keyframes': False,
 'scene_bbox': [[
      -10.939920205538575,
      -2.0469914783289735,
      -1.0306140184402466
    ],
    [
      7.077569125469017,
      1.5071640571195142,
      12.159653639708578
    ]
 ],
 # Optimization settings
 'num_steps': 200001,
 'batch_size': 4096,
 'scheduler_type': 'warmup_cosine',
 'optim_type': 'adam',
 'lr': 0.004,

 # Regularization
 'distortion_loss_weight': 0.001,
 'histogram_loss_weight': 1.0,
 'l1_time_planes': 0.0001,
 'l1_time_planes_proposal_net': 0.0001,
 'plane_tv_weight': 0.0002,
 'plane_tv_weight_proposal_net': 0.0002,
 'time_smoothness_weight': 0.001,
 'time_smoothness_weight_proposal_net': 1e-05,

 # Training settings
 'save_every': 20000,
 'valid_every': 20000,
 'save_outputs': True,
 'train_fp16': False,

 # Raymarching settings
 'single_jitter': False,
 'num_samples': 48,
 'num_proposal_samples': [256, 128],
 'num_proposal_iterations': 2,
 'use_same_proposal_network': False,
 'use_proposal_weight_anneal': True,
 'proposal_net_args_list': [
  {'num_input_coords': 4, 'num_output_coords': 8, 'resolution': [128, 128, 128, 150]},
  {'num_input_coords': 4, 'num_output_coords': 8, 'resolution': [256, 256, 256, 150]}
 ],

 # Model settings
 'concat_features_across_scales': True,
 'density_activation': 'trunc_exp',
 'linear_decoder': False,
 'multiscale_res': [1, 2, 4, 8],
 'grid_config': [{
  'grid_dimensions': 2,
  'input_coordinate_dim': 4,
  'output_coordinate_dim': 16,
  'resolution': [64, 64, 64, 150]
 }],
}
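Not an official answer, but one generic way to localize which term produces the NaN is to check every loss component before it is summed; the loss names used here are placeholders, not the repository's variable names:

import torch

def assert_finite(name: str, value: torch.Tensor, step: int) -> None:
    # Fail fast on the first non-finite loss term so the offending
    # regularizer is identified instead of the already-summed total.
    if not torch.isfinite(value).all():
        raise FloatingPointError(f"step {step}: loss term '{name}' is not finite")

# Hypothetical use inside the training loop, with placeholder names:
# for name, value in {'mse': mse, 'plane_tv': plane_tv, 'distortion': distortion}.items():
#     assert_finite(name, value, step)
# torch.autograd.set_detect_anomaly(True) can additionally pinpoint NaNs produced in backward.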

OOM issue

Hi! Thanks for sharing awesome work.

I'm trying to train a model with dynerf data, but I keep encountering an OOM issue.

(I adjusted data_downsample and num_steps in the config for the initial DyNeRF training.)

'save_outputs': True,
 'scene_bbox': [[-3.0, -1.8, -1.2], [3.0, 1.8, 1.2]],
 'scheduler_type': 'warmup_cosine',
 'single_jitter': False,
 'time_smoothness_weight': 0.001,
 'time_smoothness_weight_proposal_net': 1e-05,
 'train_fp16': True,
 'use_proposal_weight_anneal': True,
 'use_same_proposal_network': False,
 'valid_every': 30000}
2023-06-23 04:43:45,251|    INFO| Loading Video360Dataset with downsample=4.0
Loading train data: 100%|████████████████████| 19/19 [00:41<00:00,  2.20s/it]
2023-06-23 04:44:53,937|    INFO| Computed 1953572400 ISG weights in 24.48s.
killed

When checked with dmesg, the following error appeared:

[2916867.742639] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=user.slice,mems_allowed=0-1,global_oom,task_memcg=/user.slice/user-1004.slice/session-1589.scope,task=python,pid=3555848,uid=1004
[2916867.742721] Out of memory: Killed process 3555848 (python) total-vm:167336276kB, anon-rss:123357620kB, file-rss:4kB, shmem-rss:8kB, UID:1004 pgtables:243752kB oom_score_adj:0
[2916871.967326] oom_reaper: reaped process 3555848 (python), now anon-rss:0kB, file-rss:0kB, shmem-rss:8kB

question about the effect of the T dimension

I tried to verify the effect of the T dimension on the result. I fixed the T-dimension values of the grids' parameters (pretrained parameters for both the K-Planes grids and the proposal-network grids) after t0, to find out whether the scene could be frozen at t0. The ideal result would be that the scene maintains its state at t0. But the results were not ideal, as shown in the attached videos: one stops at the beginning, the other stops at the halfway point.
I wonder whether there are other time-relevant parts I missed.
The raw output videos have some issues playing in the browser, so please download them to play.

https://drive.google.com/drive/folders/1rjbIvXhAksc1KftudVy0HHSAZr4bhyIx?usp=share_link

Visualization of training views

I found that in the video dataset, the train split returns a batch of rays while the test split returns the rays of a whole image.
But I want to check the reconstruction quality of the training views (not just MSE or PSNR values) once training is done.
I tried to modify the __getitem__ method of the video dataset by ignoring the if self.split == 'train' branch, but the output is still not correct.
So I wonder if there is an easy way to achieve this.

Model size

Thanks for the great work!
I find that the model size of a trained K-Planes is much larger than the size reported in the paper.
Are the parameters of the proposal grids excluded from the reported results in the paper, which could explain the different model size? Looking forward to your reply.

training on black background videos

Hi, thanks for your great work! I'm trying to play with the repo, and I found that when I trained the model with RGBA images as follows, it worked fine.
r_000

But when I replace the data with RGB files as follows, the novel view synthesis results are broken.
r_000
I tried to set the background color to 0 here, but then the network couldn't converge during training (everything is black). What might be the solution?

Question about result of DyNeRF dataset : a difference between the results in the paper and my results

Hi,

Thank you for sharing your work and the interesting model you've developed.
I hope that your model becomes the new standard for Dynamic NeRF models.

I trained the K-planes hybrid version using the DyNeRF dataset, following your instructions without making any other changes to the configuration. Here's what I did:

  1. Precomputed IST & ISG weights by setting downsample=4, global_step=1.
  2. Trained the model with downsample=2, global_step=90001.

For training, I used one A100 32GB and torch=1.10.1.

Then, I evaluated models with --validate-only and --log-dir /path/to/dir_of_model

I compared my results with the results from the supplementary paper, and I marked my results in red.

image

There seems to be a mismatch in the metrics, and the performance appears to be worse than MixVoxel-L.
My question is, how can I achieve the same results as described in the paper?

Best regards.

Visualization/Evaluation ERROR

When I execute the command python plenoxels/main.py --render-only --log-dir /home/... I receive the error "No module named 'plenoxels'".
When I instead add sys.path.append('/home/.../K-Planes-main'), I encounter the error 'Illegal instruction (core dumped)' again.
May I ask what is causing this?

Toy scene of kplanes

Could you please share the toy scene dataset from your project page?
I tried to use K-Planes to reconstruct a simple toy scene created by Kubric; however, K-Planes is prone to overfitting.
I tried providing more data, making the scene static, and simplifying the model parameters, but it didn't work.
So I wonder whether there is something wrong with the custom dataset or whether K-Planes fails to reconstruct low-texture scenes.
The toy scene on the project page:
image
The train (left) and test (right) results of the custom data:
image image

Mismatch in total variation loss between description in paper and the implementation

Hi,
Thanks for open-sourcing the code for your work.

In the paper, you have mentioned that total variation in space is encouraged in "1D along spatial dimensions in the space-time planes" and "2D along both spatial dimensions in the space-space planes". However, in the code, total variation loss is applied along 2 dimensions for all the planes including space-time planes. Further, the total variation loss is applied identically twice for the space-space planes. Can you please clarify why?
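For readers following the discussion, a generic 2D total-variation penalty over a single feature plane looks like the sketch below; this is a plain re-implementation for illustration, not the repository's function:

import torch

def plane_total_variation(plane: torch.Tensor) -> torch.Tensor:
    # plane: (C, H, W) feature plane; penalize squared differences between
    # neighboring cells along both of its axes. Applied to a space-time plane,
    # one of the two terms therefore runs along the time axis as well.
    tv_rows = (plane[:, 1:, :] - plane[:, :-1, :]).pow(2).mean()
    tv_cols = (plane[:, :, 1:] - plane[:, :, :-1]).pow(2).mean()
    return tv_rows + tv_cols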

How to train DyNeRF scenes ?

Hi, thanks for providing the code! I am trying to train DyNeRF scenes but I keep getting stuck at the dataloader step.

 'isg': False,
 'isg_step': -1,
 'linear_decoder': False,
 'logdir': './logs/realdynamic',
 'ndc': True,
 'num_samples': 48,
 'num_steps': 90001,
 'optim_type': 'adam',
 'plane_tv_weight': 0.0002,
 'plane_tv_weight_proposal_net': 0.0002,
 'proposal_net_args_list': [{'num_input_coords': 4,
                             'num_output_coords': 8,
                             'resolution': [128, 128, 128, 150]},
                            {'num_input_coords': 4,
                             'num_output_coords': 8,
                             'resolution': [256, 256, 256, 150]}],
 'save_every': 30000,
 'save_outputs': True,
 'scene_bbox': [[-3.0, -1.8, -1.2], [3.0, 1.8, 1.2]],
 'scheduler_type': 'warmup_cosine',
 'single_jitter': False,
 'time_smoothness_weight': 0.001,
 'time_smoothness_weight_proposal_net': 1e-05,
 'train_fp16': True,
 'use_proposal_weight_anneal': True,
 'use_same_proposal_network': False,
 'valid_every': 30000}
2023-01-25 15:42:09,846|    INFO| Loading Video360Dataset with downsample=2.0
Loading train data:  53%|██████████▎         | 10/19 [00:43<00:34,  3.84s/it

I saw the recommendation to train these scenes with data_downsample=4 for one step. How exactly can I do that? Should I manually change 'num_steps': 1?

No such file or directory: 'tmp'

Hi, right from the off I am getting this error:

File "D:\plenoxels\main.py", line 14, in get_freer_gpu
memory_available = [int(x.split()[2]) for x in open('tmp', 'r').readlines()]
FileNotFoundError: [Errno 2] No such file or directory: 'tmp'

thanks
Howie

used coordinate system in Phototourism

Hello,

I'm currently grappling with understanding the optimal coordinate system for rendering in the context of the phototourism architecture. I've made adjustments to the coordinate system, particularly due to the realization that a z-forward axes configuration is being used.

To illustrate, the image linked below demonstrates my modification to the coordinate system:
z-forward
corrected

In the visual, the bottom points represent coordinates for the Brandenburg Gate, while the top points denote my custom coordinates and their respective coordinate system.
Here are some examples:

step30000-18

step30000-18-depth

     mse_None              psnr_None           ssim_None           ms-ssim_None
0    0.005098310299217701  23.351652693560833  0.6523159677399757  0.7991683880488077

Questions about planes

Thanks for your great work!
I'm doing exploratory experiments for my dissertation.
For static scenes, I have some confusion about the chosen three planes.
In the paper, three mutually orthogonal planes, x=0, y=0, and z=0, are used, and good results are obtained.
I made a linear weighted combination of the xyz coordinate axes to get three planes that are not orthogonal to each other, such as x+y+z=0, x+y=0, and y=0. Experiments seem to show that this still achieves good results.
However, when I use non-linear combinations (such as multiplication and division) of the xyz coordinate axes, such as x/z, I find that the results become bad, with blurring and artifacts appearing. Could you explain what is going on here? Do you have an idea?
Thank you again.

Acquiring requirement of the model

Hi,
@sarafridov Thanks for sharing the model!
I'm very interested in your paper. Could you please provide the details of the installation environment to reproduce the experiments?
Much appreciated!

Yours,
Thomas

Something wrong with training D-NeRF datasets

I downloaded a D-NeRF dataset to train on as a monocular video, using D-NeRF/dnerf_explicit.py as the config.
But it complains that the file "poses_bounds.npy" is missing from the dataset.
Did I download the wrong D-NeRF dataset? What should I do?

More information:

error:
image

Where I downloaded the dataset:
https://github.com/albertpumarola/D-NeRF
image

Dataset structure:
image
I thought the camera pose information is in the .json files.

About average_poses function?

I've always had a doubt: could you explain why normalize(np.cross(z, y_)) is not -x? The coordinate system of the poses is "right up back". Looking forward to your reply.
image

question about the implementation of the 2D grid

Hello, thanks for your nice work. While reading your code, I have a question about your implementation of the 2D grids. In your code, you use PyTorch to create the grid volume, and I notice you use the tcnn module to build some MLPs. Since the tcnn module also implements a multi-resolution hash grid, what is the difference between them?

spacetime-only and render-only and test video

I find that the results from render-only or spacetime-only are much better than the test video created during training.

Is that normal?

step150000:

step150000_.mp4

render-only:

render.mp4

(P.S. I am testing with only 2 views, i.e. 2 original videos, for training.)

DyNeRF implementation

Hey @sarafridov,
I am impressed with the results you've got with the K-Planes.
Within the paper, you compare your results with DyNeRF model. I'm curious to experiment with the DyNeRF model, so can you please provide your implementation of DyNeRF?

Thank you!

Inconsistent Results Despite Setting Random Seed

Hello,

I am encountering an issue with inconsistent results when running the code, despite setting the random seed for reproducibility. I have tried using the original random seed setting code provided in your repository. However, I am still facing variable outcomes in repeated runs.

The original seed setting code in your repository is:
def _set_random_seed(seed) -> None:
    """Set randomness seed in torch and numpy"""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
I am using a single GPU for experiments on a static scene from the NeRF-synthetic dataset. I suspect there might be other parts of the code or dependencies that introduce randomness.

Could you please help me identify any potential sources of variability or suggest further modifications to ensure consistent results?

Thank you for your assistance.
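A few additional PyTorch settings are commonly needed on top of the snippet above for strict run-to-run reproducibility; these are standard PyTorch controls, not something taken from this repository:

import random
import numpy as np
import torch

def set_strict_determinism(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)           # seed every CUDA device
    torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False     # disable nondeterministic autotuning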

About spacetime_.mp4

After I run
PYTHONPATH='.' python plenoxels/main.py --config-path plenoxels/configs/final/DyNeRF/dynerf_explicit.py --render-only

There is a .mp4 file in the log folder:

2023-02-17 02:39:46,848|    INFO| Saving video (300 frames) to ./logs/realdynamic/cutbeef_explicit/spacetime_.mp4
2023-02-17 02:40:00,764|    INFO| Saved rendering path with 300 frames to ./logs/realdynamic/cutbeef_explicit/spacetime_.mp4

But the opened video file looks like this:
image

Replicating Fig 8 from the paper

Hi,

I am a bit stuck on how to visualise the results as was done in Figure 8 in the paper - i.e. visualising xt (and yt, zt). Is there a function which I may have missed?

If not:
I've currently modified the plenoxels.models.kplanes_field.interpolate_ms_features function to evaluate a selected grid (e.g. grid (0, 2), which is one of the time-plane grids) rather than all the grids. The code then interpolates the different scales and renders the result, which I assumed was the xt plane. I'm not really sure this is correct, so I'm wondering if there are any better approaches?

Image Height and Width Flipped for D-NeRF scenes

I've been messing about with 360 in-the-wild datasets using D-NeRF data format and I believe I found a minor bug with setting the image height and width during data loading.

From plenoxels.datasets.data_loading.py

def _load_nerf_image_pose(...):
    ...
    if out_h is None:
        out_h = int(img.size[0] / downsample) # Should be img.size[1]
    if out_w is None:
        out_w = int(img.size[1] / downsample) # Should be img.size[0]

This doesn't affect square images (i.e. the original D-NeRF dataset) or the K-Planes paper results, though it might be a useful note for anyone looking to use their own datasets!

Performance on TanksandTemple dataset

Dear author, thanks for open source your work!

I am currently using K-Planes on TanksandTemple datasets with both the NeRF explicit and hybrid hyperparameters. However, the scene reconstruction quality is not very good (very blurry background, and it cannot render the object of interest). I am wondering if you have experience with your method on the TanksandTemple dataset? I would appreciate any insight you can provide.

Thank you
Screenshot 2023-08-22 at 1 03 20 pm
step30000-1

About Bilinear interpolation?

Dynamic models will have 6 planes: # (y, x), (z, x), (t, x), (z, y), (t, y), (t, z). Why not set them to (x, y), (x, z), (x, t), (y, z), (y, t), (z, t)?

During bilinear interpolation, for example with coo_comb = (0, 1), the queried coordinates are (x, y) but the grid is (y, x). Is there any problem with this operation? It makes me doubt it.
image

image
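For what it's worth, torch.nn.functional.grid_sample expects the last channel of the sampling grid ordered as (x, y), where x indexes the width (last) dimension of the input; so a plane stored as (y, x), i.e. height = y-resolution and width = x-resolution, is consistent with querying it at (x, y). A minimal standalone check of that convention (not the repository's code):

import torch
import torch.nn.functional as F

# A toy plane stored as (1, C, res_y, res_x): the last dimension is x.
plane = torch.arange(2 * 3 * 4, dtype=torch.float32).reshape(1, 2, 3, 4)

# grid_sample coordinates are normalized to [-1, 1] and ordered (x, y):
# x walks along the width (res_x) axis, y along the height (res_y) axis.
grid = torch.tensor([[[[-1.0, -1.0]]]])  # top-left corner: first column, first row
sample = F.grid_sample(plane, grid, align_corners=True)
print(sample.reshape(-1))  # tensor([ 0., 12.]) == plane[0, :, 0, 0]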
