zhaoqxcn / pynq-cnn-attempt

Some attempts to build CNN on PYNQ.

License: MIT License

Tcl 6.16% C 43.71% C++ 9.38% Makefile 0.17% VHDL 23.71% Verilog 13.29% SystemVerilog 0.39% HTML 0.35% C# 0.27% Objective-C 0.03% Batchfile 0.01% Shell 0.01% Jupyter Notebook 2.51% Python 0.03%
pynq hls

pynq-cnn-attempt's Introduction

PYNQ-CNN-ATTEMPT

These are some attempts I made during my undergraduate graduation project.

The hardware platform I use is PYNQ-Z2. The PS part is an Arm CPU running Ubuntu 16.04 LTS, which supports Python. The PL part is the Zynq XC7Z020 FPGA.

The version of Vivado and Vivado HLS is 2018.2.

If you have any problems, please contact me.

Digilent Vivado IP Library

This is the open source IP library provided by Digilent for video processing. I mainly use its rgb2dvi to implement my HDMI video output module.

HDMI VDMA Test

This is the Vivado project of the HDMI video output test I built. The video data is output from the DDR memory through VDMA. Please see the Ultrasound Image Classification section for details.

Mean Single Convolution

This is the project I built to try the PYNQ development flow before implementing CNN, which realizes hardware acceleration for a single convolution operation.
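As a reference for what the accelerator computes, here is a minimal pure-Python sketch of a single 2D convolution. The kernel size, "valid" border handling (no padding), and the mean filter below are assumptions for illustration, not the project's actual HLS code.

```python
# Minimal reference for a single 2D convolution ("valid" mode, no padding).
# The 3x3 mean kernel and 4x4 input are toy data, not the project's.

def conv2d_valid(image, kernel):
    """2D correlation with no padding; output is (H-K+1) x (W-K+1)."""
    h, w = len(image), len(image[0])
    k = len(kernel)
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            acc = 0
            for di in range(k):
                for dj in range(k):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 mean filter over a 4x4 image yields a 2x2 output.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
mean3 = [[1 / 9] * 3 for _ in range(3)]
print(conv2d_valid(img, mean3))
```

A software model like this is useful for checking the hardware result element by element before debugging the FPGA side.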

Minst CNN

This is the project that implements classification of the MNIST dataset.

Ultrasound Image Classification CNN

This project performs automatic classification of ultrasound images and is my latest work. It reads ultrasound image data from the SD card, classifies it, then composites the resulting image and outputs it through the HDMI port. Due to privacy concerns, I have only uploaded a small number of images for testing.

pynq-cnn-attempt's People

Contributors

zhaoqxcn


pynq-cnn-attempt's Issues

AttributeError: Could not find IP or hierarchy configure in overlay

Hi, using the ultra.bit present in the repo for Ultrasound Classification throws this error (screenshot omitted):

The same error is thrown for subsequent blocks as well (screenshot omitted).

Am I missing anything? Is it because I did not do anything with the Digilent Vivado IP library?

PS: My goal is to classify another dataset using your CNN, i.e. with the same structure (layers and overlay), so I need your guidance.
Thanks

Steps for using the same network for classification of new data

Hi,
Thanks for your constant support. I am able to run FPGA_CNN successfully.

Now, I want to classify some other data by using the same network structure as yours (CNN_MNIST).
What are the steps to follow to do this?

Do I also need to change something on the Vivado side, or can the same overlay (cnn.bit) be reused regardless of the data to be classified, with only the trained model (.h5) replaced?

Can you please guide me through the steps for building a new application on top of it?

Determining input and output channels for "test"

Hi @ZhaoqxCN

Can you explain how you calculated the input and output channels in this file, main.cpp?

	AXI_VAL status = 0;
	AXI_VAL batch_size = 2;
	AXI_VAL Ker_DIM = 0;
	AXI_VAL In_CH = 1;
	AXI_VAL In_DIM = 28;
	AXI_VAL Out_CH = 10;
	AXI_VAL Out_DIM = 1;

You mentioned batch_size is 2; does it represent the two convolution layers?
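A hedged sketch of where these test parameters plausibly come from for MNIST. Only In_CH=1, In_DIM=28 and Out_CH=10 follow directly from the dataset (grayscale 28x28 images, 10 digit classes); the 5x5 kernels and 2x2 pooling below are assumptions for illustration, not necessarily this project's layers.

```python
# Sketch: how a 28x28 MNIST input shrinks through assumed conv/pool layers.
# In_CH=1 (grayscale), Out_CH=10 (digit classes) come from the dataset;
# the 5x5 kernels and 2x2 pools are illustrative assumptions.

def conv_out(in_dim, ker_dim, stride=1):
    """Output size of a 'valid' (unpadded) convolution."""
    return (in_dim - ker_dim) // stride + 1

def pool_out(in_dim, pool=2):
    """Output size of non-overlapping pooling."""
    return in_dim // pool

dim = 28                            # In_DIM: MNIST images are 28x28
dim = pool_out(conv_out(dim, 5))    # conv 5x5 -> 24, pool 2x2 -> 12
dim = pool_out(conv_out(dim, 5))    # conv 5x5 -> 8,  pool 2x2 -> 4
print(dim)                          # spatial size entering the dense layer
```

The dense layer then maps everything to Out_CH=10 class scores, so Out_DIM=1 per class. A batch_size of 2 would more naturally mean two images streamed together, not two convolution layers, though only the author's code can confirm that.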

installing keras and tensorflow

Hi, thanks for such a great project. While running the ultrasound classification notebook, Keras is needed, but after installing Keras it throws this error (screenshot omitted):

Even after searching the support forums, I could not install TensorFlow.
I am using a PYNQ-Z2.

Please help me: how did you install all these libraries? Or, if possible, can you share a PYNQ-Z2 image with Keras and TensorFlow installed?

Really awaiting your reply.
Thanks

Data loading issue

Hi @ZhaoqxCN

I have modified your cnn_minst according to my algorithm. When I give it the input, it keeps running without showing a result.

Can you clear up some doubts regarding the HLS code?

My assumptions :

  1. #pragma HLS array_partition variable = B block factor = KerDim dim = 4. You have used dimension 4 in all the examples; does dim 4 refer to the square kernel matrix?
  2. #pragma HLS array_partition variable = A block factor = InCH dim = 1. You have only one pooling layer; is that why you used dimension 1?
  3. #pragma HLS array_partition variable = A block factor = InCH/16 dim = 1 and #pragma HLS array_partition variable = B block factor = InCH/16 dim = 2. This part I don't understand.

Can you please clear up these doubts? It would be really helpful to me.

Thank you.
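For readers puzzling over the same pragmas: a pure-Python picture of what HLS array_partition does may help. Block partitioning splits one array into `factor` smaller physical arrays of contiguous elements so that many elements can be accessed in the same cycle; cyclic partitioning interleaves instead, and `dim` only selects which axis of a multi-dimensional array to split. The toy data below is illustrative, not the project's.

```python
# Illustration of HLS array_partition semantics in plain Python.
# 'block' keeps contiguous runs together; 'cyclic' interleaves elements.

def block_partition(arr, factor):
    """Split arr into `factor` contiguous banks (HLS 'block' partition)."""
    n = len(arr) // factor
    return [arr[i * n:(i + 1) * n] for i in range(factor)]

def cyclic_partition(arr, factor):
    """Interleave arr across `factor` banks (HLS 'cyclic' partition)."""
    return [arr[i::factor] for i in range(factor)]

a = list(range(8))
print(block_partition(a, 2))   # [[0, 1, 2, 3], [4, 5, 6, 7]]
print(cyclic_partition(a, 2))  # [[0, 2, 4, 6], [1, 3, 5, 7]]
```

So, as an assumption about the pragmas above: `dim = 4` on a 4-D weight array would split its fourth axis (plausibly a kernel axis, hence factor = KerDim), and `factor = InCH/16` would split a channel axis into InCH/16 banks so that 16 channels share each bank.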

Low accuracy on FPGA

Hello,
I tried generating bitstreams through cnn.tcl for MNIST classification and mean single convolution (screenshot omitted):

There are no errors in the console apart from a few critical clock-skew warnings (screenshot omitted):

But when I use the generated bitstreams, the results are very poor:

  • Mean single convolution (screenshot omitted)

  • MNIST classification: low FPGA accuracy (screenshot omitted)

Please guide me on what I am doing wrong. This is very important, as I need to accelerate my own CNN and am using your work as a reference.

Thanks

driver_files

I couldn't get the driver files after completing the RTL export, so while loading the IP into the Vivado tool I got the error shown in the figure, and I am not able to generate the .bit and .tcl files. Please suggest a solution; I am new to FPGAs.
Thank you,
Best regards

Block Diagram issue

Hi @ZhaoqxCN

Can you post your Vivado block diagram PDF here? I can't see my CNN IP in PYNQ. If you could give some suggestions, that would be great.

Please see the attached block diagram.
design_1.pdf

Thanks

parameter quant_scale in mnist

Can you tell me why quant_scale = 116 in FPGA_CNN_INT8.ipynb? When I changed it to 64 or other values, the accuracy only reached about 40%. Thank you.

question about the mnist_cnn_model_int8.h5

I find that the file mnist_cnn_model_int8.h5 is the same as mnist_cnn_model.h5, and the weights in it are still float32. Don't you need to quantize the model to int8? In FPGA_CNN_INT8.ipynb you use overlay.memory.loadweight to load the weights into on-chip memory. Does that method automatically convert float32 to int8 by multiplying by quant_scale? Where can I see the actual code behind overlay.memory.loadweight rather than just the API?
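For context, here is a hedged sketch of what multiplying float32 weights by quant_scale and rounding to int8 would look like. Whether loadweight does exactly this is an assumption based on the notebooks' use of quant_scale, not the overlay's actual code.

```python
# Sketch of symmetric int8 quantization: scale, round, saturate.
# Whether overlay.memory.loadweight does this internally is an assumption.

def quantize_int8(weights, quant_scale):
    """Scale float weights by quant_scale, round, and clamp to int8 range."""
    q = []
    for w in weights:
        v = int(round(w * quant_scale))
        q.append(max(-128, min(127, v)))
    return q

w = [0.5, -0.25, 1.2, -1.5]
print(quantize_int8(w, 116))   # [58, -29, 127, -128]
```

Under this model, quant_scale fixes how the float weight range maps onto [-128, 127]; a mismatched scale (e.g. 64, as in the issue above) changes the effective weights the hardware sees, which is consistent with the reported accuracy drop.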
