Neural-ILT

Neural-ILT is an end-to-end learning-based mask optimization tool developed by the research team supervised by Prof. Evangeline F.Y. Young at The Chinese University of Hong Kong (CUHK). Neural-ILT attempts to replace the conventional end-to-end ILT (inverse lithography technology) correction process with a holistic learning-based framework. It conducts on-neural-network ILT correction for the given layout under the guidance of a partial coherent imaging model and directly outputs the optimized mask at convergence.
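
To illustrate what an ILT correction loop does conceptually, here is a heavily simplified NumPy sketch: the mask is parameterized continuously, pushed through a toy optical model (a single-kernel convolution stands in for the actual partial coherent model, which uses a sum of coherent systems) and a sigmoid resist model, and updated by gradient descent on the L2 litho loss. All function names and constants below are illustrative, not the tool's actual API.

```python
import numpy as np

def aerial_image(mask, kernel):
    """Toy optical model: 2D convolution of the mask with one kernel.
    (The real partial coherent model sums over multiple kernels.)"""
    kh, kw = kernel.shape
    m = np.pad(mask, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(mask, dtype=float)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = np.sum(m[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x, k=50.0, t=0.5):
    # Soft threshold: resist model mapping intensity to printed pattern
    return 1.0 / (1.0 + np.exp(-k * (x - t)))

def ilt_step(mask_params, target, kernel, lr=0.5):
    """One gradient-descent step on the L2 litho loss.
    mask_params are unconstrained; a sigmoid keeps the mask in [0, 1]."""
    mask = sigmoid(mask_params, k=4.0, t=0.0)
    printed = sigmoid(aerial_image(mask, kernel))
    diff = printed - target                       # dL/d(printed), up to 2x
    # Backprop through the resist sigmoid, the convolution (correlation
    # with the flipped kernel), and the mask sigmoid.
    d_printed = diff * 50.0 * printed * (1.0 - printed)
    d_mask = aerial_image(d_printed, kernel[::-1, ::-1])
    d_params = d_mask * 4.0 * mask * (1.0 - mask)
    return mask_params - lr * d_params
```

Neural-ILT's contribution is to fold this iterative loop onto a neural network, so the correction runs as (fine-tuned) forward inference rather than per-layout numerical optimization from scratch.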

Compared to conventional academic ILT solutions, e.g., MOSAIC (Gao et al., DAC'14) and GAN-OPC (Yang et al., TCAD'20), Neural-ILT enjoys:

  • a much faster ILT correction process (20x to 70x runtime speedup)
  • better mask printability at convergence
  • a modular design for easy customization and upgrading
  • ...

More details can be found in the papers listed under Citation below.

Requirements

  • python: 3.7.3
  • pytorch: 1.8.0
  • torchvision: 0.2.2
  • cudatoolkit: 11.1.1
  • pillow: 6.1.0
  • GPU: >= 10GB GPU memory for pretrain, >= 7GB for Neural-ILT
  • [This repo has been tested on a Linux machine running Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-158-generic x86_64) with CUDA 11.4]

Usage

Step 1: Download the source code. For example,

$ git clone https://github.com/cuhk-eda/neural-ilt.git

Step 2: Go to the project root and unzip the environment archive

$ cd Neural-ILT/
$ unzip env.zip

(Optional) To replace the ICCAD'20 training dataset with the ISPD'21 training dataset (last batch)

$ cd Neural-ILT/dataset/
$ unzip ispd21_train_dataset.zip

Step 3: Conduct Neural-ILT on ICCAD 2013 mask optimization contest benchmarks

$ cd Neural-ILT/
$ python neural_ilt.py

Note that we observed minor variation (±0.5%) in the mask printability score (L2 + PVB, statistics over 50 runs). We have not yet located the source of this non-determinism and would appreciate any insight from the community on resolving it ✨.

Step 4 (optional): Backbone model pre-training

$ cd Neural-ILT/
$ python pretrain_model.py

Evaluation: Evaluate the mask printability

$ cd Neural-ILT/
$ python eval.py --layout_root [root_to_layout_file] --layout_file_name [your_layout_file_name] --mask_root [root_to_mask_file] --mask_file_name [your_mask_file_name]
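
The metrics eval.py reports follow the ICCAD 2013 contest conventions: L2 error compares the printed image against the target layout at nominal process conditions, and PVB (process variation band) measures the area that prints differently across process corners. The exact corner simulation is done by the tool; as a rough guide to the arithmetic, here is a simplified NumPy sketch over binary images (the function names are illustrative, not eval.py's API):

```python
import numpy as np

def l2_error(printed, target):
    """Squared pixel-wise error between the printed image (nominal
    process condition) and the target layout, both {0,1} arrays."""
    return int(np.sum((printed.astype(int) - target.astype(int)) ** 2))

def pv_band(printed_inner, printed_outer):
    """Process variation band: pixels that print under the outer
    process corner but not the inner one (the XOR region's area)."""
    return int(np.sum(printed_outer.astype(bool) ^ printed_inner.astype(bool)))
```

A lower L2 + PVB total indicates a mask that both matches the target and is robust to process variation, which is the "mask printability score" referred to above.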

Parameters

 ├── neural_ilt.py
 │   ├── device/gpu_no: the device id
 │   ├── load_model_name/ilt_model_path: the pre-trained model of Neural-ILT
 │   ├── lr: initial learning rate
 │   ├── refine_iter_num: maximum number of on-neural-network ILT correction iterations
 │   ├── beta: hyper-parameter for cplx_loss in the Neural-ILT objective
 │   ├── gamma: lr decay rate
 │   ├── step_size: lr decay step size
 │   └── bbox_margin: the margin of the crop bbox
 │
 └── pretrain_model.py
     ├── gpu_no: the device id
     ├── num_epoch: number of training epochs
     ├── alpha: cycle loss weight for l2
     ├── beta: cycle loss weight for cplx
     ├── lr: initial learning rate
     ├── gamma: lr decay rate
     ├── step_size: lr decay step size
     ├── margin: the margin of the crop bbox
     └── read_ref: read the pre-computed crop bbox for each layout
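
The gamma/step_size pair in both scripts describes a step-decay learning-rate schedule. Assuming it follows the semantics of PyTorch's torch.optim.lr_scheduler.StepLR (which the parameter names suggest, though this is an inference, not a statement about the scripts' internals), the effective learning rate at a given epoch is:

```python
def step_decay_lr(base_lr, gamma, step_size, epoch):
    """Step decay: the learning rate is multiplied by gamma once every
    step_size epochs (the same rule torch.optim.lr_scheduler.StepLR uses)."""
    return base_lr * gamma ** (epoch // step_size)

# Example: base_lr=0.01, gamma=0.1, step_size=30
# epochs 0-29  -> 0.01
# epochs 30-59 -> 0.001
```

This is handy when tuning your own recipe: a smaller step_size or gamma makes the optimization settle faster but may stop refining the mask too early.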

Explore your own recipe for model pretraining and Neural-ILT. Have fun! 😄

Acknowledgement

We would like to thank the authors of GAN-OPC (Yang et al., TCAD'20) for providing the training layouts used in our ICCAD'20 paper. Based on these, we further generated the ISPD'21 training layouts following the procedure described in Jiang et al., ISPD'21.

Contact

Bentian Jiang ([email protected]) and Lixin Liu ([email protected])

Citation

If Neural-ILT is useful for your research, please consider citing the following papers:

@inproceedings{jiang2020neural,
  title={Neural-ILT: migrating ILT to neural networks for mask printability and complexity co-optimization},
  author={Jiang, Bentian and Liu, Lixin and Ma, Yuzhe and Zhang, Hang and Yu, Bei and Young, Evangeline FY},
  booktitle={2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD)},
  pages={1--9},
  year={2020},
  organization={IEEE}
}
@inproceedings{jiang2021building,
  title={Building up End-to-end Mask Optimization Framework with Self-training},
  author={Jiang, Bentian and Zhang, Xiaopeng and Liu, Lixin and Young, Evangeline FY},
  booktitle={Proceedings of the 2021 International Symposium on Physical Design},
  pages={63--70},
  year={2021}
}
@article{jiang2021neural,
  title={Neural-ILT 2.0: Migrating ILT to Domain-specific and Multi-task-enabled Neural Network},
  author={Jiang, Bentian and Liu, Lixin and Ma, Yuzhe and Yu, Bei and Young, Evangeline FY},
  journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
  year={2021},
  publisher={IEEE}
}

License

READ THIS LICENSE AGREEMENT CAREFULLY BEFORE USING THIS PRODUCT. BY USING THIS PRODUCT YOU INDICATE YOUR ACCEPTANCE OF THE TERMS OF THE FOLLOWING AGREEMENT. THESE TERMS APPLY TO YOU AND ANY SUBSEQUENT LICENSEE OF THIS PRODUCT.

License Agreement for Neural-ILT

Copyright (c) 2021, The Chinese University of Hong Kong All rights reserved.

CU-SD LICENSE (adapted from the original BSD license) Redistribution of any code, with or without modification, is permitted provided that the conditions below are met.

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  3. Neither the name nor trademark of the copyright holder or the author may be used to endorse or promote products derived from this software without specific prior written permission.

  4. Users are entirely responsible, to the exclusion of the author, for compliance with (a) regulations set by owners or administrators of employed equipment, (b) licensing terms of any other software, and (c) local, national, and international regulations regarding use, including those regarding import, export, and use of encryption software.

THIS FREE SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR ANY CONTRIBUTOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, EFFECTS OF UNAUTHORIZED OR MALICIOUS NETWORK ACCESS; PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


neural-ilt's Issues

Why gamma=4?

Dear authors,

Hi, thanks for the work. I noticed that when you compute the ILT loss, the value of gamma you used is 4, instead of the commonly used value of 2:
ilt_loss = (result - target).pow(4).sum()

Is there a specific reason that this value is used?

Thanks!

Runtime Error

Hi authors, thanks for the work. I am trying to run the example using python .\neural_ilt.py, but I got the following error:

RuntimeError: Cannot initialize CUDA without ATen_cuda library. PyTorch splits its backend into two shared libraries: a CPU library and a CUDA library; this error has occurred because you are trying to use some CUDA functionality, but the CUDA library has not been loaded by the dynamic linker for some reason. The CUDA library MUST be loaded, EVEN IF you don't directly use any symbols from the CUDA library! One common culprit is a lack of -INCLUDE:?warp_size@cuda@at@@yahxz in your link arguments; many dynamic linkers will delete dynamic library dependencies if you don't depend on any of their symbols. You can check if this has occurred by using link on your binary to see if there is a dependency on *_cuda.dll library.

My environment (Win64, anaconda, 3090-Ti) is as follows:

blas                      1.0                         mkl
ca-certificates           2022.12.7            h5b45459_0    conda-forge
certifi                   2022.12.7          pyhd8ed1ab_0    conda-forge
cudatoolkit               11.1.1              heb2d755_10    conda-forge
eigen                     3.3.7                h59b6b97_1
glib                      2.69.1               h5dc1a3c_2
gst-plugins-base          1.18.5               h9e645db_0
gstreamer                 1.18.5               hd78058f_0
hdf5                      1.12.1               h1756f20_2
icc_rt                    2022.1.0             h6049295_2
icu                       58.2                 ha925a31_3
intel-openmp              2021.4.0          haa95532_3556
jpeg                      9e                   h2bbff1b_0
lerc                      3.0                  hd77b12b_0
libclang                  12.0.0          default_h627e005_2
libdeflate                1.8                  h2bbff1b_5
libffi                    3.4.2                hd77b12b_6
libiconv                  1.16                 h2bbff1b_2
libogg                    1.3.5                h2bbff1b_1
libpng                    1.6.37               h2a8f88b_0
libprotobuf               3.20.1               h23ce68f_0
libtiff                   4.4.0                h8a3f274_2
libvorbis                 1.3.7                he774522_0
libwebp                   1.2.4                h2bbff1b_0
libwebp-base              1.2.4                h2bbff1b_0
libxml2                   2.9.14               h0ad7f3c_0
libxslt                   1.1.35               h2bbff1b_0
lz4-c                     1.9.4                h2bbff1b_0
mkl                       2021.4.0           haa95532_640
mkl-service               2.4.0            py37h2bbff1b_0
mkl_fft                   1.3.1            py37h277e83a_0
mkl_random                1.2.2            py37hf11a4ad_0
numpy                     1.21.5           py37h7a0a035_3
numpy-base                1.21.5           py37hca35cd5_3
opencv                    4.6.0            py37h104de81_2
opencv-python             4.7.0.68                 pypi_0    pypi
openssl                   1.1.1s               h2bbff1b_0
pcre                      8.45                 hd77b12b_0
pillow                    6.1.0                    pypi_0    pypi
pip                       22.3.1           py37haa95532_0
python                    3.7.3                h8c8aaf0_1
qt-main                   5.15.2               he8e5bd7_7
qt-webengine              5.15.9               hb9a9bb5_4
qtwebkit                  5.212                h3ad3cdb_4
setuptools                65.5.0           py37haa95532_0
six                       1.16.0             pyhd3eb1b0_1
sqlite                    3.40.0               h2bbff1b_0
torch                     1.8.0                    pypi_0    pypi
torchvision               0.2.2                    pypi_0    pypi
tqdm                      4.19.9                   pypi_0    pypi
typing-extensions         4.4.0                    pypi_0    pypi
vc                        14.2                 h21ff451_1
vs2015_runtime            14.27.29016          h5e58377_2
wheel                     0.37.1             pyhd3eb1b0_0
wincertstore              0.2              py37haa95532_2
xz                        5.2.8                h8cc25b3_0
zlib                      1.2.13               h8cc25b3_0
zstd                      1.5.2                h19a0ad4_0

Is there any reason for the error? Thanks!

origin of the kernels

The .pt files seem to be the kernels related to the lithography system; is there any code to generate them instead of simply loading them?

Inquiry Regarding Image Scaling and Cropping Methodology

Hello Neural-ILT team,

I've been delving into the Neural-ILT project and am quite impressed with the work you've achieved. While examining the image processing steps taken prior to inputting images into the network, I noted a distinct approach to scaling and cropping. This approach piqued my interest, as it appears to diverge from traditional methods that typically involve resizing images to a uniform dimension to preserve scale, followed by a process of random cropping.

Could you please provide some insight into the rationale behind this preprocessing strategy? Specifically, I'm curious about the reasons for saving different values for each image and applying custom cropping. Is this approach essential? Additionally, considering there's scaling involved, could you inform me about the image scale at which the kernels are designed to operate, especially if I'm considering doing custom cropping?

Thank you for your time and consideration. I eagerly await your response :)

Thomas,
