
neural_nano-optics's People

Contributors

ethan-tseng

neural_nano-optics's Issues

Problem with training

Hello, thank you very much for the code, but I ran into a problem while using it.
When I trained with the provided source code and data, the generator output was all white.
Can you tell me what could be causing this?

The best loss weighting coefficients

Since there is no description of the best loss weighting coefficients {λ1, λperc, λgrad} used in training, I wonder whether they are the values set in "run_train.sh": λ1 = 1.0, λperc = 0.1, and λgrad = 0.0.
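Presumably (my assumption, not confirmed anywhere in the repo docs) the total loss is the weighted sum implied by the coefficient names,

  L_total = λ1 · L1 + λperc · Lperc + λgrad · Lgrad,

so with λ1 = 1.0, λperc = 0.1, and λgrad = 0.0 the gradient term would simply be disabled by default.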

Mapping the phase distribution to the physical structure of the lens

From your published paper, I can see that the trained lens parameters are as follows:

phase_initial = np.array([-0.3494864, -0.00324192, -1., -1., -1., -1., -1., -1.], dtype=np.float32)

Plugging these parameters into the phase distribution polynomial yields the phase distribution. However, I'm wondering how to map this phase distribution to the physical structure of the lens. Could you provide some guidance and references on how to proceed with this mapping?
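For concreteness, here is how I am currently evaluating the polynomial. This is my own sketch: the even-order exponents and the normalization by the lens radius R are my assumptions based on my reading of the paper, and should be checked against the exact convention in solver.py.

  import numpy as np

  phase_initial = np.array([-0.3494864, -0.00324192, -1., -1., -1., -1., -1., -1.],
                           dtype=np.float32)

  def phase_profile(r, coeffs, R):
      # Assumed even-order radial polynomial:
      #   phase(r) = sum_i a_i * (r / R)^(2 * (i + 1)),
      # where R is the metasurface radius (normalization assumed).
      rho = r / R
      return sum(a * rho ** (2 * (i + 1)) for i, a in enumerate(coeffs))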

Some questions regarding hyperparameters

Hi team, thanks for sharing your brilliant work!

I have some questions about the hyperparameters and hope you can help me out.

  1. What is the total number of training iterations? By default, args.steps is set to 1 billion, but I don't think that can be completed in 18 hours (see the quick arithmetic after this list). Or is there some early stopping implemented?
  2. What are the lambdas in the loss formulation? In another issue you recommended searching for the best lambdas, but I wonder which values produce the PSNRs reported in the paper.
  3. When I clone the code and train the model, it runs without errors, but the model does not train well. I suspect this is because some default settings differ from the paper (e.g., the SNR is not optimized, the phase is initialized as log-asphere-like rather than zeros, phase_iter is 0 and G_iter is 10, ...). Are these defaults intended, or am I doing something wrong? I changed some settings to emulate the paper's setup, but I still fail to train the model.
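Quick arithmetic on point 1 (my own estimate, with an assumed throughput):

  steps = 1_000_000_000
  steps_per_sec = 1000              # optimistic assumption, for illustration
  hours = steps / steps_per_sec / 3600
  print(f"{hours:.0f} hours")       # ~278 hours, i.e. ~11.6 days >> 18 hours

So either training stops well before args.steps, or that value is just a practical "infinity".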

Thanks!!

Some questions about make_propagator in solver.py

I'm new to the field of Fourier optics. I couldn't understand this function because I can't match it to any standard diffraction formula. Could anyone provide some formulas to explain it?

import numpy as np
import tensorflow as tf

def make_propagator(params):
  # Builds a free-space propagation kernel on an upsampled, zero-padded
  # frequency grid of size (2 * samp - 1) x (2 * samp - 1).
  batchSize = params['batchSize']
  pixelsX = params['pixelsX']
  pixelsY = params['pixelsY']
  upsample = params['upsample']

  # Free-space wavenumber k = 2 * pi / lambda0 for each batch element,
  # broadcast over the padded grid.
  k = 2 * np.pi / params['lam0'][:, 0, 0]
  k = k[:, np.newaxis, np.newaxis]
  samp = params['upsample'] * pixelsX
  k = tf.tile(k, multiples = (1, 2 * samp - 1, 2 * samp - 1))
  k = tf.cast(k, dtype = tf.complex64)

  # Transverse spatial frequencies: non-negative frequencies up to the
  # Nyquist limit pi * upsample / Lx, then mirrored to negative frequencies
  # to form a symmetric grid of 2 * samp - 1 samples.
  k_xlist_pos = 2 * np.pi * np.linspace(0, 1 / (2 *  params['Lx'] / params['upsample']), samp)
  front = k_xlist_pos[-(samp - 1):]
  front = -front[::-1]
  k_xlist = np.hstack((front, k_xlist_pos))
  k_x = np.kron(k_xlist, np.ones((2 * samp - 1, 1)))
  k_x = k_x[np.newaxis, :, :]
  k_y = np.transpose(k_x, axes = [0, 2, 1])
  k_x = tf.convert_to_tensor(k_x, dtype = tf.complex64)
  k_x = tf.tile(k_x, multiples = (batchSize, 1, 1))
  k_y = tf.convert_to_tensor(k_y, dtype = tf.complex64)
  k_y = tf.tile(k_y, multiples = (batchSize, 1, 1))

  # Longitudinal wavevector k_z = sqrt(k^2 - k_x^2 - k_y^2). For k_x^2 + k_y^2
  # > k^2 the argument is negative, so k_z is imaginary and those (evanescent)
  # components decay on propagation.
  k_z_arg = tf.square(k) - (tf.square(k_x) + tf.square(k_y))
  k_z = tf.sqrt(k_z_arg)

  # Lateral shift (x0, y0) = f * (tan(phi), tan(theta)) that re-centers the
  # output for a plane wave incident at angles (theta, phi).
  theta = params['theta'][:, 0, 0]
  theta = theta[:, np.newaxis, np.newaxis]
  y0 = np.tan(theta) * params['f']
  y0 = tf.tile(y0, multiples = (1, 2 * samp - 1, 2 * samp - 1))
  y0 = tf.cast(y0, dtype = tf.complex64)

  phi = params['phi'][:, 0, 0]
  phi = phi[:, np.newaxis, np.newaxis]
  x0 = np.tan(phi) * params['f']
  x0 = tf.tile(x0, multiples = (1, 2 * samp - 1, 2 * samp - 1))
  x0 = tf.cast(x0, dtype = tf.complex64)

  # Transfer function H = exp(i * (k_z * f + k_x * x0 + k_y * y0)):
  # propagation over distance f plus a linear phase ramp for the shift.
  propagator_arg = 1j * (k_z * params['f'] + k_x * x0 + k_y * y0)
  propagator = tf.exp(propagator_arg)

  return propagator
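My current guess (my own reading, not confirmed by the authors) is that this implements the angular spectrum method: the field is decomposed into plane waves with transverse wavevectors (k_x, k_y), and after propagating a distance f each component acquires the phase factor

  H(k_x, k_y) = exp(i · f · k_z),   k_z = sqrt(k² − k_x² − k_y²),   k = 2π / λ0,

while the extra terms k_x · x0 + k_y · y0 form a linear phase ramp that translates the propagated field by (x0, y0) = f · (tan φ, tan θ), re-centering the response for off-axis incidence. Is that reading correct?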

Training-related issues

Thanks to the authors for sharing such fantastic work!

  1. I've been attempting to replicate your article, and I've followed the configuration outlined in the supplementary materials: PHASE_ITERS=10, G_ITERS=100, METASURFACE=neural. I trained the code on a single RTX 3090 GPU, but I've consistently observed that the loss function fails to decrease, oscillating between 0.7 and 0.8. I'm wondering if I used the correct configuration.
  2. By reading through the code, I've noticed that the parameters are updated only once after each run of PHASE_ITERS or G_ITERS, i.e., with batch_size = 1. Could the loss oscillation be due to the batch size being too small? (See the gradient-accumulation sketch after this list.)
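For point 2, a standard remedy I am considering (my own sketch, not from the repo) is gradient accumulation, which emulates a larger effective batch while still running batch_size = 1 forward passes:

  import tensorflow as tf

  ACCUM_STEPS = 8  # assumed effective batch size, for illustration

  def accumulated_step(model, optimizer, loss_fn, batches):
      # Average gradients over ACCUM_STEPS single-sample batches,
      # then apply a single optimizer update.
      accum = [tf.zeros_like(v) for v in model.trainable_variables]
      for x, y in batches:  # ACCUM_STEPS batches of size 1
          with tf.GradientTape() as tape:
              loss = loss_fn(y, model(x)) / ACCUM_STEPS
          grads = tape.gradient(loss, model.trainable_variables)
          accum = [a + g for a, g in zip(accum, grads)]
      optimizer.apply_gradients(zip(accum, model.trainable_variables))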

I want to ask you some questions.

Hi~ I am studying your article, but as a beginner there are many things I don't understand. I want to ask you about the details of the specific mapping between the phase function and the scatterer structure. Thank you very much.
